2025-09-19 16:08:00.322377 | Job console starting
2025-09-19 16:08:00.332630 | Updating git repos
2025-09-19 16:08:00.467399 | Cloning repos into workspace
2025-09-19 16:08:00.699183 | Restoring repo states
2025-09-19 16:08:00.729106 | Merging changes
2025-09-19 16:08:01.232344 | Checking out repos
2025-09-19 16:08:01.488075 | Preparing playbooks
2025-09-19 16:08:02.045122 | Running Ansible setup
2025-09-19 16:08:06.327722 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-19 16:08:07.117445 |
2025-09-19 16:08:07.117633 | PLAY [Base pre]
2025-09-19 16:08:07.135054 |
2025-09-19 16:08:07.135185 | TASK [Setup log path fact]
2025-09-19 16:08:07.166354 | orchestrator | ok
2025-09-19 16:08:07.184512 |
2025-09-19 16:08:07.184648 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-19 16:08:07.229932 | orchestrator | ok
2025-09-19 16:08:07.243226 |
2025-09-19 16:08:07.243341 | TASK [emit-job-header : Print job information]
2025-09-19 16:08:07.294446 | # Job Information
2025-09-19 16:08:07.294649 | Ansible Version: 2.16.14
2025-09-19 16:08:07.294685 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-09-19 16:08:07.294718 | Pipeline: label
2025-09-19 16:08:07.294740 | Executor: 521e9411259a
2025-09-19 16:08:07.294760 | Triggered by: https://github.com/osism/testbed/pull/2768
2025-09-19 16:08:07.294782 | Event ID: caf687d0-9572-11f0-9d9f-31dff8114740
2025-09-19 16:08:07.307911 |
2025-09-19 16:08:07.308112 | LOOP [emit-job-header : Print node information]
2025-09-19 16:08:07.433125 | orchestrator | ok:
2025-09-19 16:08:07.433434 | orchestrator | # Node Information
2025-09-19 16:08:07.433471 | orchestrator | Inventory Hostname: orchestrator
2025-09-19 16:08:07.433495 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-19 16:08:07.433517 | orchestrator | Username: zuul-testbed04
2025-09-19 16:08:07.433537 | orchestrator | Distro: Debian 12.12
2025-09-19 16:08:07.433561 | orchestrator | Provider: static-testbed
2025-09-19 16:08:07.433581 | orchestrator | Region:
2025-09-19 16:08:07.433602 | orchestrator | Label: testbed-orchestrator
2025-09-19 16:08:07.433622 | orchestrator | Product Name: OpenStack Nova
2025-09-19 16:08:07.433641 | orchestrator | Interface IP: 81.163.193.140
2025-09-19 16:08:07.461703 |
2025-09-19 16:08:07.461914 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-19 16:08:07.943405 | orchestrator -> localhost | changed
2025-09-19 16:08:07.951773 |
2025-09-19 16:08:07.951932 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-19 16:08:09.005320 | orchestrator -> localhost | changed
2025-09-19 16:08:09.021107 |
2025-09-19 16:08:09.021231 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-19 16:08:09.288427 | orchestrator -> localhost | ok
2025-09-19 16:08:09.295631 |
2025-09-19 16:08:09.295758 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-19 16:08:09.327182 | orchestrator | ok
2025-09-19 16:08:09.344688 | orchestrator | included: /var/lib/zuul/builds/cd2a4281f1324a0188bc914860afae05/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-19 16:08:09.352826 |
2025-09-19 16:08:09.352923 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-19 16:08:10.147121 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-19 16:08:10.147577 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/cd2a4281f1324a0188bc914860afae05/work/cd2a4281f1324a0188bc914860afae05_id_rsa
2025-09-19 16:08:10.147685 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/cd2a4281f1324a0188bc914860afae05/work/cd2a4281f1324a0188bc914860afae05_id_rsa.pub
2025-09-19 16:08:10.147756 | orchestrator -> localhost | The key fingerprint is:
2025-09-19 16:08:10.147840 | orchestrator -> localhost | SHA256:L3rbV1tjVduPISTbHeOolaPnT/dxBDZYHi1C3WWc92Q zuul-build-sshkey
2025-09-19 16:08:10.147899 | orchestrator -> localhost | The key's randomart image is:
2025-09-19 16:08:10.147975 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-19 16:08:10.148035 | orchestrator -> localhost | | ...o+=|
2025-09-19 16:08:10.148091 | orchestrator -> localhost | | . o+=+E|
2025-09-19 16:08:10.148144 | orchestrator -> localhost | | =.*=**|
2025-09-19 16:08:10.148196 | orchestrator -> localhost | | . B.+++|
2025-09-19 16:08:10.148249 | orchestrator -> localhost | | S + o o+|
2025-09-19 16:08:10.148311 | orchestrator -> localhost | | .o . o+o|
2025-09-19 16:08:10.148365 | orchestrator -> localhost | | . .o .o++|
2025-09-19 16:08:10.148415 | orchestrator -> localhost | | ..o o...+|
2025-09-19 16:08:10.148470 | orchestrator -> localhost | | ...... .. .|
2025-09-19 16:08:10.148521 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-19 16:08:10.148649 | orchestrator -> localhost | ok: Runtime: 0:00:00.304567
2025-09-19 16:08:10.164025 |
2025-09-19 16:08:10.164173 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-19 16:08:10.200502 | orchestrator | ok
2025-09-19 16:08:10.214944 | orchestrator | included: /var/lib/zuul/builds/cd2a4281f1324a0188bc914860afae05/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-19 16:08:10.224980 |
2025-09-19 16:08:10.225182 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-19 16:08:10.252233 | orchestrator | skipping: Conditional result was False
2025-09-19 16:08:10.269893 |
2025-09-19 16:08:10.270058 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-19 16:08:10.887927 | orchestrator | changed
2025-09-19 16:08:10.897159 |
2025-09-19 16:08:10.897288 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-19 16:08:11.176276 | orchestrator | ok
2025-09-19 16:08:11.184686 |
2025-09-19 16:08:11.184834 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-19 16:08:11.611960 | orchestrator | ok
2025-09-19 16:08:11.620867 |
2025-09-19 16:08:11.620996 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-19 16:08:12.037384 | orchestrator | ok
2025-09-19 16:08:12.046456 |
2025-09-19 16:08:12.046584 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-19 16:08:12.071758 | orchestrator | skipping: Conditional result was False
2025-09-19 16:08:12.083500 |
2025-09-19 16:08:12.083648 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-19 16:08:12.524658 | orchestrator -> localhost | changed
2025-09-19 16:08:12.538662 |
2025-09-19 16:08:12.538778 | TASK [add-build-sshkey : Add back temp key]
2025-09-19 16:08:12.877071 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/cd2a4281f1324a0188bc914860afae05/work/cd2a4281f1324a0188bc914860afae05_id_rsa (zuul-build-sshkey)
2025-09-19 16:08:12.877649 | orchestrator -> localhost | ok: Runtime: 0:00:00.019343
2025-09-19 16:08:12.892695 |
2025-09-19 16:08:12.892892 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-19 16:08:13.310129 | orchestrator | ok
2025-09-19 16:08:13.318884 |
2025-09-19 16:08:13.319006 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-19 16:08:13.342864 | orchestrator | skipping: Conditional result was False
2025-09-19 16:08:13.396287 |
2025-09-19 16:08:13.396414 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-19 16:08:13.808433 | orchestrator | ok
2025-09-19 16:08:13.829210 |
2025-09-19 16:08:13.829472 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-19 16:08:13.879636 | orchestrator | ok
2025-09-19 16:08:13.889810 |
2025-09-19 16:08:13.889943 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-19 16:08:14.207880 | orchestrator -> localhost | ok
2025-09-19 16:08:14.216913 |
2025-09-19 16:08:14.217018 | TASK [validate-host : Collect information about the host]
2025-09-19 16:08:15.465345 | orchestrator | ok
2025-09-19 16:08:15.482811 |
2025-09-19 16:08:15.482952 | TASK [validate-host : Sanitize hostname]
2025-09-19 16:08:15.549771 | orchestrator | ok
2025-09-19 16:08:15.558936 |
2025-09-19 16:08:15.559084 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-19 16:08:16.160166 | orchestrator -> localhost | changed
2025-09-19 16:08:16.174274 |
2025-09-19 16:08:16.174438 | TASK [validate-host : Collect information about zuul worker]
2025-09-19 16:08:16.605266 | orchestrator | ok
2025-09-19 16:08:16.615466 |
2025-09-19 16:08:16.615600 | TASK [validate-host : Write out all zuul information for each host]
2025-09-19 16:08:17.166724 | orchestrator -> localhost | changed
2025-09-19 16:08:17.186459 |
2025-09-19 16:08:17.186582 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-19 16:08:17.476710 | orchestrator | ok
2025-09-19 16:08:17.488489 |
2025-09-19 16:08:17.488651 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-19 16:08:56.166288 | orchestrator | changed:
2025-09-19 16:08:56.166729 | orchestrator | .d..t...... src/
2025-09-19 16:08:56.166890 | orchestrator | .d..t...... src/github.com/
2025-09-19 16:08:56.166963 | orchestrator | .d..t...... src/github.com/osism/
2025-09-19 16:08:56.167019 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-19 16:08:56.167071 | orchestrator | RedHat.yml
2025-09-19 16:08:56.187630 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-19 16:08:56.187650 | orchestrator | RedHat.yml
2025-09-19 16:08:56.187711 | orchestrator | = 1.53.0"...
2025-09-19 16:09:09.376072 | orchestrator | 16:09:09.375 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-09-19 16:09:09.562813 | orchestrator | 16:09:09.562 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-19 16:09:10.035112 | orchestrator | 16:09:10.034 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-19 16:09:10.526678 | orchestrator | 16:09:10.526 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-19 16:09:11.478834 | orchestrator | 16:09:11.478 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-19 16:09:11.550433 | orchestrator | 16:09:11.550 STDOUT terraform: - Installing hashicorp/local v2.5.3...
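The "Create Temp SSH key" step earlier in this console can be approximated with plain `ssh-keygen`; the build UUID below matches this job, but the temp directory and exact flags are an illustrative sketch, not the role's literal task:

```shell
# Sketch of the add-build-sshkey "Create Temp SSH key" step (assumed flags):
# an RSA 3072 key with an empty passphrase, named after the build UUID,
# with the zuul-build-sshkey comment seen in the fingerprint line above.
BUILD_UUID=cd2a4281f1324a0188bc914860afae05
WORK_DIR="$(mktemp -d)"
KEY_FILE="${WORK_DIR}/${BUILD_UUID}_id_rsa"
ssh-keygen -q -t rsa -b 3072 -N '' -C zuul-build-sshkey -f "${KEY_FILE}"
# Print the SHA256 fingerprint, as the console does after generation.
ssh-keygen -l -f "${KEY_FILE}.pub"
```

The generated private key is what later tasks install on all nodes and re-add to the local SSH agent after the master key is removed.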
2025-09-19 16:09:12.274507 | orchestrator | 16:09:12.274 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80) 2025-09-19 16:09:12.274569 | orchestrator | 16:09:12.274 STDOUT terraform: Providers are signed by their developers. 2025-09-19 16:09:12.274576 | orchestrator | 16:09:12.274 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here: 2025-09-19 16:09:12.274580 | orchestrator | 16:09:12.274 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/ 2025-09-19 16:09:12.274585 | orchestrator | 16:09:12.274 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider 2025-09-19 16:09:12.274597 | orchestrator | 16:09:12.274 STDOUT terraform: selections it made above. Include this file in your version control repository 2025-09-19 16:09:12.274607 | orchestrator | 16:09:12.274 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when 2025-09-19 16:09:12.274611 | orchestrator | 16:09:12.274 STDOUT terraform: you run "tofu init" in the future. 2025-09-19 16:09:12.274925 | orchestrator | 16:09:12.274 STDOUT terraform: OpenTofu has been successfully initialized! 2025-09-19 16:09:12.274933 | orchestrator | 16:09:12.274 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see 2025-09-19 16:09:12.274937 | orchestrator | 16:09:12.274 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands 2025-09-19 16:09:12.274941 | orchestrator | 16:09:12.274 STDOUT terraform: should now work. 2025-09-19 16:09:12.274945 | orchestrator | 16:09:12.274 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu, 2025-09-19 16:09:12.274948 | orchestrator | 16:09:12.274 STDOUT terraform: rerun this command to reinitialize your working directory. 
If you forget, other 2025-09-19 16:09:12.274953 | orchestrator | 16:09:12.274 STDOUT terraform: commands will detect it and remind you to do so if necessary. 2025-09-19 16:09:12.390112 | orchestrator | 16:09:12.389 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead. 2025-09-19 16:09:12.390242 | orchestrator | 16:09:12.389 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead. 2025-09-19 16:09:12.620718 | orchestrator | 16:09:12.620 STDOUT terraform: Created and switched to workspace "ci"! 2025-09-19 16:09:12.620790 | orchestrator | 16:09:12.620 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state, 2025-09-19 16:09:12.620801 | orchestrator | 16:09:12.620 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state 2025-09-19 16:09:12.620807 | orchestrator | 16:09:12.620 STDOUT terraform: for this configuration. 2025-09-19 16:09:12.744061 | orchestrator | 16:09:12.743 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead. 2025-09-19 16:09:12.744187 | orchestrator | 16:09:12.743 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead. 2025-09-19 16:09:12.832729 | orchestrator | 16:09:12.831 STDOUT terraform: ci.auto.tfvars 2025-09-19 16:09:12.848063 | orchestrator | 16:09:12.847 STDOUT terraform: default_custom.tf 2025-09-19 16:09:12.989446 | orchestrator | 16:09:12.987 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead. 
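The repeated `TERRAGRUNT_TFPATH` deprecation warnings above can be silenced by exporting the replacement variable Terragrunt itself suggests; this is a sketch of the environment change only (the path is the one printed in the log, and Terragrunt is not invoked here):

```shell
# Migrate off the deprecated Terragrunt variable: drop TERRAGRUNT_TFPATH
# and set TG_TF_PATH to the same tofu/terraform binary path instead.
unset TERRAGRUNT_TFPATH
export TG_TF_PATH=/home/zuul-testbed04/terraform
echo "TG_TF_PATH=${TG_TF_PATH}"
```

The same pattern applies to the deprecated `workspace` and `fmt` subcommands, which the warnings say should be invoked as `terragrunt run -- workspace` and `terragrunt run -- fmt`.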
2025-09-19 16:09:13.914434 | orchestrator | 16:09:13.914 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-19 16:09:14.446996 | orchestrator | 16:09:14.445 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-19 16:09:14.784328 | orchestrator | 16:09:14.784 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-19 16:09:14.784437 | orchestrator | 16:09:14.784 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-19 16:09:14.784445 | orchestrator | 16:09:14.784 STDOUT terraform:  + create
2025-09-19 16:09:14.784450 | orchestrator | 16:09:14.784 STDOUT terraform:  <= read (data resources)
2025-09-19 16:09:14.784455 | orchestrator | 16:09:14.784 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-19 16:09:14.784476 | orchestrator | 16:09:14.784 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-09-19 16:09:14.784505 | orchestrator | 16:09:14.784 STDOUT terraform:  # (config refers to values not yet known)
2025-09-19 16:09:14.784535 | orchestrator | 16:09:14.784 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-19 16:09:14.784565 | orchestrator | 16:09:14.784 STDOUT terraform:  + checksum = (known after apply)
2025-09-19 16:09:14.784591 | orchestrator | 16:09:14.784 STDOUT terraform:  + created_at = (known after apply)
2025-09-19 16:09:14.784628 | orchestrator | 16:09:14.784 STDOUT terraform:  + file = (known after apply)
2025-09-19 16:09:14.784659 | orchestrator | 16:09:14.784 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.784687 | orchestrator | 16:09:14.784 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 16:09:14.784708 | orchestrator | 16:09:14.784 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-09-19 16:09:14.784738 | orchestrator | 16:09:14.784 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-09-19 16:09:14.784755 | orchestrator | 16:09:14.784 STDOUT terraform:  + most_recent = true
2025-09-19 16:09:14.784783 | orchestrator | 16:09:14.784 STDOUT terraform:  + name = (known after apply)
2025-09-19 16:09:14.784810 | orchestrator | 16:09:14.784 STDOUT terraform:  + protected = (known after apply)
2025-09-19 16:09:14.784838 | orchestrator | 16:09:14.784 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.784865 | orchestrator | 16:09:14.784 STDOUT terraform:  + schema = (known after apply)
2025-09-19 16:09:14.784893 | orchestrator | 16:09:14.784 STDOUT terraform:  + size_bytes = (known after apply)
2025-09-19 16:09:14.784923 | orchestrator | 16:09:14.784 STDOUT terraform:  + tags = (known after apply)
2025-09-19 16:09:14.784950 | orchestrator | 16:09:14.784 STDOUT terraform:  + updated_at = (known after apply)
2025-09-19 16:09:14.784965 | orchestrator | 16:09:14.784 STDOUT terraform:  }
2025-09-19 16:09:14.785013 | orchestrator | 16:09:14.784 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-09-19 16:09:14.785040 | orchestrator | 16:09:14.785 STDOUT terraform:  # (config refers to values not yet known)
2025-09-19 16:09:14.785083 | orchestrator | 16:09:14.785 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-19 16:09:14.785111 | orchestrator | 16:09:14.785 STDOUT terraform:  + checksum = (known after apply)
2025-09-19 16:09:14.785139 | orchestrator | 16:09:14.785 STDOUT terraform:  + created_at = (known after apply)
2025-09-19 16:09:14.785166 | orchestrator | 16:09:14.785 STDOUT terraform:  + file = (known after apply)
2025-09-19 16:09:14.785200 | orchestrator | 16:09:14.785 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.785230 | orchestrator | 16:09:14.785 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 16:09:14.785257 | orchestrator | 16:09:14.785 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-09-19 16:09:14.785285 | orchestrator | 16:09:14.785 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-09-19 16:09:14.785307 | orchestrator | 16:09:14.785 STDOUT terraform:  + most_recent = true
2025-09-19 16:09:14.785331 | orchestrator | 16:09:14.785 STDOUT terraform:  + name = (known after apply)
2025-09-19 16:09:14.785360 | orchestrator | 16:09:14.785 STDOUT terraform:  + protected = (known after apply)
2025-09-19 16:09:14.785386 | orchestrator | 16:09:14.785 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.785423 | orchestrator | 16:09:14.785 STDOUT terraform:  + schema = (known after apply)
2025-09-19 16:09:14.785451 | orchestrator | 16:09:14.785 STDOUT terraform:  + size_bytes = (known after apply)
2025-09-19 16:09:14.785478 | orchestrator | 16:09:14.785 STDOUT terraform:  + tags = (known after apply)
2025-09-19 16:09:14.785505 | orchestrator | 16:09:14.785 STDOUT terraform:  + updated_at = (known after apply)
2025-09-19 16:09:14.785526 | orchestrator | 16:09:14.785 STDOUT terraform:  }
2025-09-19 16:09:14.785556 | orchestrator | 16:09:14.785 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-09-19 16:09:14.785585 | orchestrator | 16:09:14.785 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-09-19 16:09:14.785620 | orchestrator | 16:09:14.785 STDOUT terraform:  + content = (known after apply)
2025-09-19 16:09:14.785654 | orchestrator | 16:09:14.785 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-19 16:09:14.785687 | orchestrator | 16:09:14.785 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-19 16:09:14.785724 | orchestrator | 16:09:14.785 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-19 16:09:14.785760 | orchestrator | 16:09:14.785 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-19 16:09:14.785796 | orchestrator | 16:09:14.785 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-19 16:09:14.785834 | orchestrator | 16:09:14.785 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-19 16:09:14.785853 | orchestrator | 16:09:14.785 STDOUT terraform:  + directory_permission = "0777"
2025-09-19 16:09:14.785875 | orchestrator | 16:09:14.785 STDOUT terraform:  + file_permission = "0644"
2025-09-19 16:09:14.785909 | orchestrator | 16:09:14.785 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-09-19 16:09:14.785945 | orchestrator | 16:09:14.785 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.785951 | orchestrator | 16:09:14.785 STDOUT terraform:  }
2025-09-19 16:09:14.785982 | orchestrator | 16:09:14.785 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-09-19 16:09:14.786004 | orchestrator | 16:09:14.785 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-09-19 16:09:14.786058 | orchestrator | 16:09:14.786 STDOUT terraform:  + content = (known after apply)
2025-09-19 16:09:14.786090 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-19 16:09:14.786127 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-19 16:09:14.786161 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-19 16:09:14.786200 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-19 16:09:14.786230 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-19 16:09:14.786264 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-19 16:09:14.786289 | orchestrator | 16:09:14.786 STDOUT terraform:  + directory_permission = "0777"
2025-09-19 16:09:14.786320 | orchestrator | 16:09:14.786 STDOUT terraform:  + file_permission = "0644"
2025-09-19 16:09:14.786349 | orchestrator | 16:09:14.786 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-09-19 16:09:14.786387 | orchestrator | 16:09:14.786 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.786407 | orchestrator | 16:09:14.786 STDOUT terraform:  }
2025-09-19 16:09:14.786436 | orchestrator | 16:09:14.786 STDOUT terraform:  # local_file.inventory will be created
2025-09-19 16:09:14.786456 | orchestrator | 16:09:14.786 STDOUT terraform:  + resource "local_file" "inventory" {
2025-09-19 16:09:14.786490 | orchestrator | 16:09:14.786 STDOUT terraform:  + content = (known after apply)
2025-09-19 16:09:14.786522 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-19 16:09:14.786558 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-19 16:09:14.786592 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-19 16:09:14.786625 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-19 16:09:14.786659 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-19 16:09:14.786694 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-19 16:09:14.786718 | orchestrator | 16:09:14.786 STDOUT terraform:  + directory_permission = "0777"
2025-09-19 16:09:14.786740 | orchestrator | 16:09:14.786 STDOUT terraform:  + file_permission = "0644"
2025-09-19 16:09:14.786771 | orchestrator | 16:09:14.786 STDOUT terraform:  + filename = "inventory.ci"
2025-09-19 16:09:14.786812 | orchestrator | 16:09:14.786 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.786819 | orchestrator | 16:09:14.786 STDOUT terraform:  }
2025-09-19 16:09:14.786848 | orchestrator | 16:09:14.786 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-09-19 16:09:14.786878 | orchestrator | 16:09:14.786 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-09-19 16:09:14.786911 | orchestrator | 16:09:14.786 STDOUT terraform:  + content = (sensitive value)
2025-09-19 16:09:14.786945 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-09-19 16:09:14.786977 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-09-19 16:09:14.787015 | orchestrator | 16:09:14.786 STDOUT terraform:  + content_md5 = (known after apply)
2025-09-19 16:09:14.787046 | orchestrator | 16:09:14.787 STDOUT terraform:  + content_sha1 = (known after apply)
2025-09-19 16:09:14.787082 | orchestrator | 16:09:14.787 STDOUT terraform:  + content_sha256 = (known after apply)
2025-09-19 16:09:14.787115 | orchestrator | 16:09:14.787 STDOUT terraform:  + content_sha512 = (known after apply)
2025-09-19 16:09:14.787136 | orchestrator | 16:09:14.787 STDOUT terraform:  + directory_permission = "0700"
2025-09-19 16:09:14.787171 | orchestrator | 16:09:14.787 STDOUT terraform:  + file_permission = "0600"
2025-09-19 16:09:14.787190 | orchestrator | 16:09:14.787 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-09-19 16:09:14.787227 | orchestrator | 16:09:14.787 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.787235 | orchestrator | 16:09:14.787 STDOUT terraform:  }
2025-09-19 16:09:14.787262 | orchestrator | 16:09:14.787 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-09-19 16:09:14.787294 | orchestrator | 16:09:14.787 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-09-19 16:09:14.787318 | orchestrator | 16:09:14.787 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.787335 | orchestrator | 16:09:14.787 STDOUT terraform:  }
2025-09-19 16:09:14.787386 | orchestrator | 16:09:14.787 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-19 16:09:14.787468 | orchestrator | 16:09:14.787 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-19 16:09:14.787501 | orchestrator | 16:09:14.787 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 16:09:14.787531 | orchestrator | 16:09:14.787 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 16:09:14.787560 | orchestrator | 16:09:14.787 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.787595 | orchestrator | 16:09:14.787 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 16:09:14.787631 | orchestrator | 16:09:14.787 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 16:09:14.788368 | orchestrator | 16:09:14.787 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-09-19 16:09:14.788379 | orchestrator | 16:09:14.787 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.788383 | orchestrator | 16:09:14.787 STDOUT terraform:  + size = 80
2025-09-19 16:09:14.788387 | orchestrator | 16:09:14.787 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 16:09:14.788430 | orchestrator | 16:09:14.787 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 16:09:14.788435 | orchestrator | 16:09:14.787 STDOUT terraform:  }
2025-09-19 16:09:14.788439 | orchestrator | 16:09:14.787 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-19 16:09:14.788443 | orchestrator | 16:09:14.787 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 16:09:14.788447 | orchestrator | 16:09:14.787 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 16:09:14.788450 | orchestrator | 16:09:14.787 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 16:09:14.788454 | orchestrator | 16:09:14.787 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.788458 | orchestrator | 16:09:14.787 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 16:09:14.788462 | orchestrator | 16:09:14.787 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 16:09:14.788466 | orchestrator | 16:09:14.787 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-09-19 16:09:14.788470 | orchestrator | 16:09:14.788 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.788474 | orchestrator | 16:09:14.788 STDOUT terraform:  + size = 80
2025-09-19 16:09:14.788477 | orchestrator | 16:09:14.788 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 16:09:14.788481 | orchestrator | 16:09:14.788 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 16:09:14.788485 | orchestrator | 16:09:14.788 STDOUT terraform:  }
2025-09-19 16:09:14.788489 | orchestrator | 16:09:14.788 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-19 16:09:14.788493 | orchestrator | 16:09:14.788 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 16:09:14.788496 | orchestrator | 16:09:14.788 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 16:09:14.788506 | orchestrator | 16:09:14.788 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 16:09:14.788510 | orchestrator | 16:09:14.788 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.788514 | orchestrator | 16:09:14.788 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 16:09:14.788518 | orchestrator | 16:09:14.788 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 16:09:14.789282 | orchestrator | 16:09:14.788 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-09-19 16:09:14.789453 | orchestrator | 16:09:14.789 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.789464 | orchestrator | 16:09:14.789 STDOUT terraform:  + size = 80
2025-09-19 16:09:14.789559 | orchestrator | 16:09:14.789 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 16:09:14.789570 | orchestrator | 16:09:14.789 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 16:09:14.789621 | orchestrator | 16:09:14.789 STDOUT terraform:  }
2025-09-19 16:09:14.789752 | orchestrator | 16:09:14.789 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-19 16:09:14.789866 | orchestrator | 16:09:14.789 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 16:09:14.789902 | orchestrator | 16:09:14.789 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 16:09:14.790041 | orchestrator | 16:09:14.789 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 16:09:14.790165 | orchestrator | 16:09:14.790 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.790295 | orchestrator | 16:09:14.790 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 16:09:14.790332 | orchestrator | 16:09:14.790 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 16:09:14.790474 | orchestrator | 16:09:14.790 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-09-19 16:09:14.790605 | orchestrator | 16:09:14.790 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.790630 | orchestrator | 16:09:14.790 STDOUT terraform:  + size = 80
2025-09-19 16:09:14.790657 | orchestrator | 16:09:14.790 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 16:09:14.790776 | orchestrator | 16:09:14.790 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 16:09:14.790783 | orchestrator | 16:09:14.790 STDOUT terraform:  }
2025-09-19 16:09:14.790926 | orchestrator | 16:09:14.790 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-19 16:09:14.791064 | orchestrator | 16:09:14.790 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 16:09:14.791101 | orchestrator | 16:09:14.791 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 16:09:14.791216 | orchestrator | 16:09:14.791 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 16:09:14.791268 | orchestrator | 16:09:14.791 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.791377 | orchestrator | 16:09:14.791 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 16:09:14.791425 | orchestrator | 16:09:14.791 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 16:09:14.791568 | orchestrator | 16:09:14.791 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-09-19 16:09:14.791694 | orchestrator | 16:09:14.791 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.791725 | orchestrator | 16:09:14.791 STDOUT terraform:  + size = 80
2025-09-19 16:09:14.791833 | orchestrator | 16:09:14.791 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 16:09:14.791885 | orchestrator | 16:09:14.791 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 16:09:14.791891 | orchestrator | 16:09:14.791 STDOUT terraform:  }
2025-09-19 16:09:14.792023 | orchestrator | 16:09:14.791 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-19 16:09:14.792164 | orchestrator | 16:09:14.792 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 16:09:14.792200 | orchestrator | 16:09:14.792 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 16:09:14.792344 | orchestrator | 16:09:14.792 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 16:09:14.792378 | orchestrator | 16:09:14.792 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.792417 | orchestrator | 16:09:14.792 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 16:09:14.792701 | orchestrator | 16:09:14.792 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 16:09:14.792770 | orchestrator | 16:09:14.792 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-09-19 16:09:14.792776 | orchestrator | 16:09:14.792 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.792781 | orchestrator | 16:09:14.792 STDOUT terraform:  + size = 80
2025-09-19 16:09:14.792808 | orchestrator | 16:09:14.792 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 16:09:14.792848 | orchestrator | 16:09:14.792 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 16:09:14.792856 | orchestrator | 16:09:14.792 STDOUT terraform:  }
2025-09-19 16:09:14.792886 | orchestrator | 16:09:14.792 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-19 16:09:14.792951 | orchestrator | 16:09:14.792 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-19 16:09:14.792961 | orchestrator | 16:09:14.792 STDOUT terraform:  + attachment = (known after apply)
2025-09-19 16:09:14.792996 | orchestrator | 16:09:14.792 STDOUT terraform:  + availability_zone = "nova"
2025-09-19 16:09:14.793022 | orchestrator | 16:09:14.792 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.793765 | orchestrator | 16:09:14.793 STDOUT terraform:  + image_id = (known after apply)
2025-09-19 16:09:14.793806 | orchestrator | 16:09:14.793 STDOUT terraform:  + metadata = (known after apply)
2025-09-19 16:09:14.793883 | orchestrator | 16:09:14.793 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-09-19 16:09:14.793889 | orchestrator | 16:09:14.793 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.793894 | orchestrator | 16:09:14.793 STDOUT terraform:  + size = 80
2025-09-19 16:09:14.793908 | orchestrator | 16:09:14.793 STDOUT terraform:  + volume_retype_policy = "never"
2025-09-19 16:09:14.793935 | orchestrator | 16:09:14.793 STDOUT terraform:  + volume_type = "ssd"
2025-09-19 16:09:14.793941 | orchestrator | 16:09:14.793 STDOUT terraform:  }
2025-09-19 16:09:14.794009 | orchestrator | 16:09:14.793 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-19 16:09:14.794039 | orchestrator | 16:09:14.793 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-19 16:09:14.794083 | orchestrator | 16:09:14.794 STDOUT
terraform:  + attachment = (known after apply) 2025-09-19 16:09:14.794104 | orchestrator | 16:09:14.794 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 16:09:14.794137 | orchestrator | 16:09:14.794 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.794172 | orchestrator | 16:09:14.794 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 16:09:14.794231 | orchestrator | 16:09:14.794 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-19 16:09:14.794251 | orchestrator | 16:09:14.794 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.794269 | orchestrator | 16:09:14.794 STDOUT terraform:  + size = 20 2025-09-19 16:09:14.794293 | orchestrator | 16:09:14.794 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 16:09:14.794332 | orchestrator | 16:09:14.794 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 16:09:14.794337 | orchestrator | 16:09:14.794 STDOUT terraform:  } 2025-09-19 16:09:14.794402 | orchestrator | 16:09:14.794 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-19 16:09:14.794423 | orchestrator | 16:09:14.794 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 16:09:14.794456 | orchestrator | 16:09:14.794 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 16:09:14.794479 | orchestrator | 16:09:14.794 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 16:09:14.794515 | orchestrator | 16:09:14.794 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.794587 | orchestrator | 16:09:14.794 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 16:09:14.794595 | orchestrator | 16:09:14.794 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-19 16:09:14.794614 | orchestrator | 16:09:14.794 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.794656 | orchestrator | 16:09:14.794 STDOUT terraform:  + size = 20 2025-09-19 16:09:14.794664 | 
orchestrator | 16:09:14.794 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 16:09:14.794697 | orchestrator | 16:09:14.794 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 16:09:14.794702 | orchestrator | 16:09:14.794 STDOUT terraform:  } 2025-09-19 16:09:14.794734 | orchestrator | 16:09:14.794 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-19 16:09:14.794777 | orchestrator | 16:09:14.794 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 16:09:14.794812 | orchestrator | 16:09:14.794 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 16:09:14.794835 | orchestrator | 16:09:14.794 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 16:09:14.794880 | orchestrator | 16:09:14.794 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.794904 | orchestrator | 16:09:14.794 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 16:09:14.794941 | orchestrator | 16:09:14.794 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-19 16:09:14.794991 | orchestrator | 16:09:14.794 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.794997 | orchestrator | 16:09:14.794 STDOUT terraform:  + size = 20 2025-09-19 16:09:14.795039 | orchestrator | 16:09:14.794 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 16:09:14.795044 | orchestrator | 16:09:14.795 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 16:09:14.795049 | orchestrator | 16:09:14.795 STDOUT terraform:  } 2025-09-19 16:09:14.796006 | orchestrator | 16:09:14.795 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-19 16:09:14.807123 | orchestrator | 16:09:14.795 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 16:09:14.807208 | orchestrator | 16:09:14.807 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 16:09:14.807223 | orchestrator | 
16:09:14.807 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 16:09:14.807234 | orchestrator | 16:09:14.807 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.807268 | orchestrator | 16:09:14.807 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 16:09:14.807311 | orchestrator | 16:09:14.807 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-19 16:09:14.807334 | orchestrator | 16:09:14.807 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.807354 | orchestrator | 16:09:14.807 STDOUT terraform:  + size = 20 2025-09-19 16:09:14.807375 | orchestrator | 16:09:14.807 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 16:09:14.807416 | orchestrator | 16:09:14.807 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 16:09:14.807434 | orchestrator | 16:09:14.807 STDOUT terraform:  } 2025-09-19 16:09:14.807457 | orchestrator | 16:09:14.807 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-19 16:09:14.807505 | orchestrator | 16:09:14.807 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 16:09:14.807536 | orchestrator | 16:09:14.807 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 16:09:14.807563 | orchestrator | 16:09:14.807 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 16:09:14.807608 | orchestrator | 16:09:14.807 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.807624 | orchestrator | 16:09:14.807 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 16:09:14.807669 | orchestrator | 16:09:14.807 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-19 16:09:14.807705 | orchestrator | 16:09:14.807 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.807740 | orchestrator | 16:09:14.807 STDOUT terraform:  + size = 20 2025-09-19 16:09:14.807751 | orchestrator | 16:09:14.807 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 
16:09:14.807764 | orchestrator | 16:09:14.807 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 16:09:14.807774 | orchestrator | 16:09:14.807 STDOUT terraform:  } 2025-09-19 16:09:14.807875 | orchestrator | 16:09:14.807 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-19 16:09:14.807921 | orchestrator | 16:09:14.807 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 16:09:14.807936 | orchestrator | 16:09:14.807 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 16:09:14.807968 | orchestrator | 16:09:14.807 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 16:09:14.808008 | orchestrator | 16:09:14.807 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.808041 | orchestrator | 16:09:14.807 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 16:09:14.808088 | orchestrator | 16:09:14.808 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-19 16:09:14.808103 | orchestrator | 16:09:14.808 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.808134 | orchestrator | 16:09:14.808 STDOUT terraform:  + size = 20 2025-09-19 16:09:14.808148 | orchestrator | 16:09:14.808 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 16:09:14.808161 | orchestrator | 16:09:14.808 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 16:09:14.808174 | orchestrator | 16:09:14.808 STDOUT terraform:  } 2025-09-19 16:09:14.808215 | orchestrator | 16:09:14.808 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-19 16:09:14.808257 | orchestrator | 16:09:14.808 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 16:09:14.808297 | orchestrator | 16:09:14.808 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 16:09:14.808320 | orchestrator | 16:09:14.808 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 16:09:14.808340 | 
orchestrator | 16:09:14.808 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.808360 | orchestrator | 16:09:14.808 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 16:09:14.808551 | orchestrator | 16:09:14.808 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-19 16:09:14.808580 | orchestrator | 16:09:14.808 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.808596 | orchestrator | 16:09:14.808 STDOUT terraform:  + size = 20 2025-09-19 16:09:14.808614 | orchestrator | 16:09:14.808 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 16:09:14.808630 | orchestrator | 16:09:14.808 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 16:09:14.808645 | orchestrator | 16:09:14.808 STDOUT terraform:  } 2025-09-19 16:09:14.808668 | orchestrator | 16:09:14.808 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-19 16:09:14.808685 | orchestrator | 16:09:14.808 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 16:09:14.808716 | orchestrator | 16:09:14.808 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 16:09:14.808732 | orchestrator | 16:09:14.808 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 16:09:14.808753 | orchestrator | 16:09:14.808 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.808770 | orchestrator | 16:09:14.808 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 16:09:14.808786 | orchestrator | 16:09:14.808 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-19 16:09:14.808806 | orchestrator | 16:09:14.808 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.808823 | orchestrator | 16:09:14.808 STDOUT terraform:  + size = 20 2025-09-19 16:09:14.808840 | orchestrator | 16:09:14.808 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 16:09:14.808861 | orchestrator | 16:09:14.808 STDOUT terraform:  + volume_type = "ssd" 
2025-09-19 16:09:14.808877 | orchestrator | 16:09:14.808 STDOUT terraform:  } 2025-09-19 16:09:14.808898 | orchestrator | 16:09:14.808 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-19 16:09:14.808915 | orchestrator | 16:09:14.808 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-19 16:09:14.809658 | orchestrator | 16:09:14.808 STDOUT terraform:  + attachment = (known after apply) 2025-09-19 16:09:14.809772 | orchestrator | 16:09:14.808 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 16:09:14.809785 | orchestrator | 16:09:14.808 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.809792 | orchestrator | 16:09:14.808 STDOUT terraform:  + metadata = (known after apply) 2025-09-19 16:09:14.809799 | orchestrator | 16:09:14.809 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-19 16:09:14.809806 | orchestrator | 16:09:14.809 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.809812 | orchestrator | 16:09:14.809 STDOUT terraform:  + size = 20 2025-09-19 16:09:14.809819 | orchestrator | 16:09:14.809 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-19 16:09:14.809825 | orchestrator | 16:09:14.809 STDOUT terraform:  + volume_type = "ssd" 2025-09-19 16:09:14.809832 | orchestrator | 16:09:14.809 STDOUT terraform:  } 2025-09-19 16:09:14.809849 | orchestrator | 16:09:14.809 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-19 16:09:14.809857 | orchestrator | 16:09:14.809 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-19 16:09:14.809863 | orchestrator | 16:09:14.809 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 16:09:14.809870 | orchestrator | 16:09:14.809 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 16:09:14.809876 | orchestrator | 16:09:14.809 STDOUT terraform:  + all_metadata = (known after apply) 
2025-09-19 16:09:14.809882 | orchestrator | 16:09:14.809 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 16:09:14.809888 | orchestrator | 16:09:14.809 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 16:09:14.809907 | orchestrator | 16:09:14.809 STDOUT terraform:  + config_drive = true 2025-09-19 16:09:14.809914 | orchestrator | 16:09:14.809 STDOUT terraform:  + created = (known after apply) 2025-09-19 16:09:14.809920 | orchestrator | 16:09:14.809 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 16:09:14.809926 | orchestrator | 16:09:14.809 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-19 16:09:14.809932 | orchestrator | 16:09:14.809 STDOUT terraform:  + force_delete = false 2025-09-19 16:09:14.809939 | orchestrator | 16:09:14.809 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-19 16:09:14.809945 | orchestrator | 16:09:14.809 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.809951 | orchestrator | 16:09:14.809 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 16:09:14.809958 | orchestrator | 16:09:14.809 STDOUT terraform:  + image_name = (known after apply) 2025-09-19 16:09:14.809972 | orchestrator | 16:09:14.809 STDOUT terraform:  + key_pair = "testbed" 2025-09-19 16:09:14.809978 | orchestrator | 16:09:14.809 STDOUT terraform:  + name = "testbed-manager" 2025-09-19 16:09:14.809985 | orchestrator | 16:09:14.809 STDOUT terraform:  + power_state = "active" 2025-09-19 16:09:14.809991 | orchestrator | 16:09:14.809 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.809997 | orchestrator | 16:09:14.809 STDOUT terraform:  + security_groups = (known after apply) 2025-09-19 16:09:14.810003 | orchestrator | 16:09:14.809 STDOUT terraform:  + stop_before_destroy = false 2025-09-19 16:09:14.810009 | orchestrator | 16:09:14.809 STDOUT terraform:  + updated = (known after apply) 2025-09-19 16:09:14.810036 | orchestrator | 16:09:14.809 STDOUT terraform:  + 
user_data = (sensitive value) 2025-09-19 16:09:14.810043 | orchestrator | 16:09:14.809 STDOUT terraform:  + block_device { 2025-09-19 16:09:14.810049 | orchestrator | 16:09:14.809 STDOUT terraform:  + boot_index = 0 2025-09-19 16:09:14.810055 | orchestrator | 16:09:14.809 STDOUT terraform:  + delete_on_termination = false 2025-09-19 16:09:14.810061 | orchestrator | 16:09:14.809 STDOUT terraform:  + destination_type = "volume" 2025-09-19 16:09:14.810067 | orchestrator | 16:09:14.809 STDOUT terraform:  + multiattach = false 2025-09-19 16:09:14.810073 | orchestrator | 16:09:14.809 STDOUT terraform:  + source_type = "volume" 2025-09-19 16:09:14.810080 | orchestrator | 16:09:14.809 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 16:09:14.810089 | orchestrator | 16:09:14.809 STDOUT terraform:  } 2025-09-19 16:09:14.810095 | orchestrator | 16:09:14.809 STDOUT terraform:  + network { 2025-09-19 16:09:14.810101 | orchestrator | 16:09:14.809 STDOUT terraform:  + access_network = false 2025-09-19 16:09:14.810107 | orchestrator | 16:09:14.809 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-19 16:09:14.810114 | orchestrator | 16:09:14.810 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 16:09:14.810122 | orchestrator | 16:09:14.810 STDOUT terraform:  + mac = (known after apply) 2025-09-19 16:09:14.810134 | orchestrator | 16:09:14.810 STDOUT terraform:  + name = (known after apply) 2025-09-19 16:09:14.810143 | orchestrator | 16:09:14.810 STDOUT terraform:  + port = (known after apply) 2025-09-19 16:09:14.810183 | orchestrator | 16:09:14.810 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 16:09:14.810193 | orchestrator | 16:09:14.810 STDOUT terraform:  } 2025-09-19 16:09:14.810202 | orchestrator | 16:09:14.810 STDOUT terraform:  } 2025-09-19 16:09:14.810250 | orchestrator | 16:09:14.810 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-19 16:09:14.810289 | orchestrator | 16:09:14.810 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-19 16:09:14.810323 | orchestrator | 16:09:14.810 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 16:09:14.810370 | orchestrator | 16:09:14.810 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 16:09:14.810382 | orchestrator | 16:09:14.810 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-19 16:09:14.810440 | orchestrator | 16:09:14.810 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 16:09:14.810467 | orchestrator | 16:09:14.810 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 16:09:14.810476 | orchestrator | 16:09:14.810 STDOUT terraform:  + config_drive = true 2025-09-19 16:09:14.810518 | orchestrator | 16:09:14.810 STDOUT terraform:  + created = (known after apply) 2025-09-19 16:09:14.810556 | orchestrator | 16:09:14.810 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 16:09:14.810581 | orchestrator | 16:09:14.810 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-19 16:09:14.810604 | orchestrator | 16:09:14.810 STDOUT terraform:  + force_delete = false 2025-09-19 16:09:14.810642 | orchestrator | 16:09:14.810 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-19 16:09:14.810673 | orchestrator | 16:09:14.810 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.810721 | orchestrator | 16:09:14.810 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 16:09:14.810730 | orchestrator | 16:09:14.810 STDOUT terraform:  + image_name = (known after apply) 2025-09-19 16:09:14.810767 | orchestrator | 16:09:14.810 STDOUT terraform:  + key_pair = "testbed" 2025-09-19 16:09:14.810796 | orchestrator | 16:09:14.810 STDOUT terraform:  + name = "testbed-node-0" 2025-09-19 16:09:14.810831 | orchestrator | 16:09:14.810 STDOUT terraform:  + power_state = "active" 2025-09-19 16:09:14.810857 | orchestrator | 16:09:14.810 STDOUT terraform:  + region = (known after 
apply) 2025-09-19 16:09:14.810891 | orchestrator | 16:09:14.810 STDOUT terraform:  + security_groups = (known after apply) 2025-09-19 16:09:14.810925 | orchestrator | 16:09:14.810 STDOUT terraform:  + stop_before_destroy = false 2025-09-19 16:09:14.810952 | orchestrator | 16:09:14.810 STDOUT terraform:  + updated = (known after apply) 2025-09-19 16:09:14.810997 | orchestrator | 16:09:14.810 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-19 16:09:14.811019 | orchestrator | 16:09:14.810 STDOUT terraform:  + block_device { 2025-09-19 16:09:14.811034 | orchestrator | 16:09:14.811 STDOUT terraform:  + boot_index = 0 2025-09-19 16:09:14.811060 | orchestrator | 16:09:14.811 STDOUT terraform:  + delete_on_termination = false 2025-09-19 16:09:14.811085 | orchestrator | 16:09:14.811 STDOUT terraform:  + destination_type = "volume" 2025-09-19 16:09:14.811110 | orchestrator | 16:09:14.811 STDOUT terraform:  + multiattach = false 2025-09-19 16:09:14.811132 | orchestrator | 16:09:14.811 STDOUT terraform:  + source_type = "volume" 2025-09-19 16:09:14.811172 | orchestrator | 16:09:14.811 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 16:09:14.811182 | orchestrator | 16:09:14.811 STDOUT terraform:  } 2025-09-19 16:09:14.811191 | orchestrator | 16:09:14.811 STDOUT terraform:  + network { 2025-09-19 16:09:14.811224 | orchestrator | 16:09:14.811 STDOUT terraform:  + access_network = false 2025-09-19 16:09:14.811233 | orchestrator | 16:09:14.811 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-19 16:09:14.811267 | orchestrator | 16:09:14.811 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 16:09:14.811297 | orchestrator | 16:09:14.811 STDOUT terraform:  + mac = (known after apply) 2025-09-19 16:09:14.811335 | orchestrator | 16:09:14.811 STDOUT terraform:  + name = (known after apply) 2025-09-19 16:09:14.811364 | orchestrator | 16:09:14.811 STDOUT terraform:  + port = (known after apply) 2025-09-19 
16:09:14.811386 | orchestrator | 16:09:14.811 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 16:09:14.811414 | orchestrator | 16:09:14.811 STDOUT terraform:  } 2025-09-19 16:09:14.811578 | orchestrator | 16:09:14.811 STDOUT terraform:  } 2025-09-19 16:09:14.811648 | orchestrator | 16:09:14.811 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-19 16:09:14.811655 | orchestrator | 16:09:14.811 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-19 16:09:14.811666 | orchestrator | 16:09:14.811 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 16:09:14.811671 | orchestrator | 16:09:14.811 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 16:09:14.811675 | orchestrator | 16:09:14.811 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-19 16:09:14.811679 | orchestrator | 16:09:14.811 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 16:09:14.811685 | orchestrator | 16:09:14.811 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 16:09:14.811716 | orchestrator | 16:09:14.811 STDOUT terraform:  + config_drive = true 2025-09-19 16:09:14.811744 | orchestrator | 16:09:14.811 STDOUT terraform:  + created = (known after apply) 2025-09-19 16:09:14.811782 | orchestrator | 16:09:14.811 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 16:09:14.811801 | orchestrator | 16:09:14.811 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-19 16:09:14.811824 | orchestrator | 16:09:14.811 STDOUT terraform:  + force_delete = false 2025-09-19 16:09:14.811873 | orchestrator | 16:09:14.811 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-19 16:09:14.811890 | orchestrator | 16:09:14.811 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.811922 | orchestrator | 16:09:14.811 STDOUT terraform:  + image_id = (known after apply) 2025-09-19 16:09:14.811955 | orchestrator | 16:09:14.811 STDOUT 
terraform:  + image_name = (known after apply) 2025-09-19 16:09:14.811985 | orchestrator | 16:09:14.811 STDOUT terraform:  + key_pair = "testbed" 2025-09-19 16:09:14.812014 | orchestrator | 16:09:14.811 STDOUT terraform:  + name = "testbed-node-1" 2025-09-19 16:09:14.812036 | orchestrator | 16:09:14.812 STDOUT terraform:  + power_state = "active" 2025-09-19 16:09:14.812080 | orchestrator | 16:09:14.812 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.812104 | orchestrator | 16:09:14.812 STDOUT terraform:  + security_groups = (known after apply) 2025-09-19 16:09:14.812125 | orchestrator | 16:09:14.812 STDOUT terraform:  + stop_before_destroy = false 2025-09-19 16:09:14.812171 | orchestrator | 16:09:14.812 STDOUT terraform:  + updated = (known after apply) 2025-09-19 16:09:14.812209 | orchestrator | 16:09:14.812 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-19 16:09:14.812217 | orchestrator | 16:09:14.812 STDOUT terraform:  + block_device { 2025-09-19 16:09:14.812243 | orchestrator | 16:09:14.812 STDOUT terraform:  + boot_index = 0 2025-09-19 16:09:14.812275 | orchestrator | 16:09:14.812 STDOUT terraform:  + delete_on_termination = false 2025-09-19 16:09:14.812298 | orchestrator | 16:09:14.812 STDOUT terraform:  + destination_type = "volume" 2025-09-19 16:09:14.812324 | orchestrator | 16:09:14.812 STDOUT terraform:  + multiattach = false 2025-09-19 16:09:14.812372 | orchestrator | 16:09:14.812 STDOUT terraform:  + source_type = "volume" 2025-09-19 16:09:14.812421 | orchestrator | 16:09:14.812 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 16:09:14.812427 | orchestrator | 16:09:14.812 STDOUT terraform:  } 2025-09-19 16:09:14.812431 | orchestrator | 16:09:14.812 STDOUT terraform:  + network { 2025-09-19 16:09:14.812450 | orchestrator | 16:09:14.812 STDOUT terraform:  + access_network = false 2025-09-19 16:09:14.812471 | orchestrator | 16:09:14.812 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-09-19 16:09:14.812497 | orchestrator | 16:09:14.812 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-19 16:09:14.812527 | orchestrator | 16:09:14.812 STDOUT terraform:  + mac = (known after apply) 2025-09-19 16:09:14.812555 | orchestrator | 16:09:14.812 STDOUT terraform:  + name = (known after apply) 2025-09-19 16:09:14.812586 | orchestrator | 16:09:14.812 STDOUT terraform:  + port = (known after apply) 2025-09-19 16:09:14.812627 | orchestrator | 16:09:14.812 STDOUT terraform:  + uuid = (known after apply) 2025-09-19 16:09:14.812633 | orchestrator | 16:09:14.812 STDOUT terraform:  } 2025-09-19 16:09:14.812638 | orchestrator | 16:09:14.812 STDOUT terraform:  } 2025-09-19 16:09:14.812676 | orchestrator | 16:09:14.812 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-19 16:09:14.812723 | orchestrator | 16:09:14.812 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-19 16:09:14.812748 | orchestrator | 16:09:14.812 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-19 16:09:14.812783 | orchestrator | 16:09:14.812 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-19 16:09:14.812823 | orchestrator | 16:09:14.812 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-19 16:09:14.812853 | orchestrator | 16:09:14.812 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 16:09:14.812877 | orchestrator | 16:09:14.812 STDOUT terraform:  + availability_zone = "nova" 2025-09-19 16:09:14.812898 | orchestrator | 16:09:14.812 STDOUT terraform:  + config_drive = true 2025-09-19 16:09:14.812932 | orchestrator | 16:09:14.812 STDOUT terraform:  + created = (known after apply) 2025-09-19 16:09:14.812964 | orchestrator | 16:09:14.812 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-19 16:09:14.813000 | orchestrator | 16:09:14.812 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-19 16:09:14.813007 | orchestrator | 16:09:14.812 
STDOUT terraform:
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
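For orientation, a minimal HCL sketch that would produce plan entries of this shape is given below. It is an illustrative assumption, not the actual osism/testbed Terraform: the `node_count` variable, the `openstack_blockstorage_volume_v3` resource names, and the attachment index mapping are all hypothetical; only the literal values ("testbed", "OSISM-8V-32", "nova", the `block_device` settings) are taken from the plan above.

```hcl
# Hypothetical sketch only -- not the real osism/testbed Terraform.
variable "node_count" { default = 6 } # plan shows node_server[0]..[5]

# No public_key given, so the provider generates the pair
# (consistent with "private_key = (sensitive value)" in the plan).
resource "openstack_compute_keypair_v2" "key" {
  name = "testbed"
}

resource "openstack_compute_instance_v2" "node_server" {
  count             = var.node_count
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"

  # Boot from a pre-created volume, as in the plan's block_device stanza.
  block_device {
    boot_index            = 0
    source_type           = "volume"
    destination_type      = "volume"
    delete_on_termination = false
    uuid                  = openstack_blockstorage_volume_v3.node_base[count.index].id # assumed resource
  }
}

# Nine extra-volume attachments ([0]..[8]); resource name and
# instance/volume mapping are assumptions for illustration.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % var.node_count].id
  volume_id   = openstack_blockstorage_volume_v3.node_extra[count.index].id # assumed resource
}
```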
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 16:09:14.834748 | orchestrator | 16:09:14.834 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 16:09:14.834751 | orchestrator | 16:09:14.834 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 16:09:14.834755 | orchestrator | 16:09:14.834 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 16:09:14.834759 | orchestrator | 16:09:14.834 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 16:09:14.834762 | orchestrator | 16:09:14.834 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 16:09:14.834766 | orchestrator | 16:09:14.834 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 16:09:14.834770 | orchestrator | 16:09:14.834 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.834774 | orchestrator | 16:09:14.834 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 16:09:14.834777 | orchestrator | 16:09:14.834 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 16:09:14.834781 | orchestrator | 16:09:14.834 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 16:09:14.834785 | orchestrator | 16:09:14.834 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 16:09:14.834788 | orchestrator | 16:09:14.834 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.834792 | orchestrator | 16:09:14.834 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 16:09:14.834796 | orchestrator | 16:09:14.834 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 16:09:14.834800 | orchestrator | 16:09:14.834 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.834803 | orchestrator | 16:09:14.834 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 16:09:14.834807 | orchestrator | 16:09:14.834 STDOUT terraform:  } 2025-09-19 16:09:14.834811 | orchestrator | 16:09:14.834 STDOUT terraform:  
+ allowed_address_pairs { 2025-09-19 16:09:14.834815 | orchestrator | 16:09:14.834 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 16:09:14.834819 | orchestrator | 16:09:14.834 STDOUT terraform:  } 2025-09-19 16:09:14.834824 | orchestrator | 16:09:14.834 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.834828 | orchestrator | 16:09:14.834 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 16:09:14.834835 | orchestrator | 16:09:14.834 STDOUT terraform:  } 2025-09-19 16:09:14.834838 | orchestrator | 16:09:14.834 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.834842 | orchestrator | 16:09:14.834 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 16:09:14.834846 | orchestrator | 16:09:14.834 STDOUT terraform:  } 2025-09-19 16:09:14.834851 | orchestrator | 16:09:14.834 STDOUT terraform:  + binding (known after apply) 2025-09-19 16:09:14.834855 | orchestrator | 16:09:14.834 STDOUT terraform:  + fixed_ip { 2025-09-19 16:09:14.834860 | orchestrator | 16:09:14.834 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-19 16:09:14.834892 | orchestrator | 16:09:14.834 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 16:09:14.834898 | orchestrator | 16:09:14.834 STDOUT terraform:  } 2025-09-19 16:09:14.834913 | orchestrator | 16:09:14.834 STDOUT terraform:  } 2025-09-19 16:09:14.834958 | orchestrator | 16:09:14.834 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-19 16:09:14.835002 | orchestrator | 16:09:14.834 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 16:09:14.835037 | orchestrator | 16:09:14.834 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 16:09:14.835073 | orchestrator | 16:09:14.835 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 16:09:14.835107 | orchestrator | 16:09:14.835 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-09-19 16:09:14.835142 | orchestrator | 16:09:14.835 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 16:09:14.835177 | orchestrator | 16:09:14.835 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 16:09:14.835211 | orchestrator | 16:09:14.835 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 16:09:14.835248 | orchestrator | 16:09:14.835 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 16:09:14.835284 | orchestrator | 16:09:14.835 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 16:09:14.835319 | orchestrator | 16:09:14.835 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.835351 | orchestrator | 16:09:14.835 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 16:09:14.835386 | orchestrator | 16:09:14.835 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 16:09:14.835431 | orchestrator | 16:09:14.835 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 16:09:14.835465 | orchestrator | 16:09:14.835 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 16:09:14.835501 | orchestrator | 16:09:14.835 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.835534 | orchestrator | 16:09:14.835 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 16:09:14.835570 | orchestrator | 16:09:14.835 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 16:09:14.835589 | orchestrator | 16:09:14.835 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.835616 | orchestrator | 16:09:14.835 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 16:09:14.835627 | orchestrator | 16:09:14.835 STDOUT terraform:  } 2025-09-19 16:09:14.835645 | orchestrator | 16:09:14.835 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.835673 | orchestrator | 16:09:14.835 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 16:09:14.835687 | 
orchestrator | 16:09:14.835 STDOUT terraform:  } 2025-09-19 16:09:14.835706 | orchestrator | 16:09:14.835 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.835733 | orchestrator | 16:09:14.835 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 16:09:14.835740 | orchestrator | 16:09:14.835 STDOUT terraform:  } 2025-09-19 16:09:14.835761 | orchestrator | 16:09:14.835 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.835788 | orchestrator | 16:09:14.835 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 16:09:14.835801 | orchestrator | 16:09:14.835 STDOUT terraform:  } 2025-09-19 16:09:14.835824 | orchestrator | 16:09:14.835 STDOUT terraform:  + binding (known after apply) 2025-09-19 16:09:14.835855 | orchestrator | 16:09:14.835 STDOUT terraform:  + fixed_ip { 2025-09-19 16:09:14.835859 | orchestrator | 16:09:14.835 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-19 16:09:14.835884 | orchestrator | 16:09:14.835 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 16:09:14.835891 | orchestrator | 16:09:14.835 STDOUT terraform:  } 2025-09-19 16:09:14.835907 | orchestrator | 16:09:14.835 STDOUT terraform:  } 2025-09-19 16:09:14.835953 | orchestrator | 16:09:14.835 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-19 16:09:14.835997 | orchestrator | 16:09:14.835 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 16:09:14.836031 | orchestrator | 16:09:14.835 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 16:09:14.836065 | orchestrator | 16:09:14.836 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 16:09:14.836098 | orchestrator | 16:09:14.836 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 16:09:14.836132 | orchestrator | 16:09:14.836 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 16:09:14.836168 | orchestrator | 
16:09:14.836 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 16:09:14.836202 | orchestrator | 16:09:14.836 STDOUT terraform:  + device_owner = (known after apply) 2025-09-19 16:09:14.836237 | orchestrator | 16:09:14.836 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 16:09:14.836272 | orchestrator | 16:09:14.836 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 16:09:14.836307 | orchestrator | 16:09:14.836 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.836342 | orchestrator | 16:09:14.836 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 16:09:14.836376 | orchestrator | 16:09:14.836 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 16:09:14.836435 | orchestrator | 16:09:14.836 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 16:09:14.836454 | orchestrator | 16:09:14.836 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 16:09:14.836490 | orchestrator | 16:09:14.836 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.836523 | orchestrator | 16:09:14.836 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 16:09:14.836560 | orchestrator | 16:09:14.836 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 16:09:14.836579 | orchestrator | 16:09:14.836 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.836606 | orchestrator | 16:09:14.836 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 16:09:14.836613 | orchestrator | 16:09:14.836 STDOUT terraform:  } 2025-09-19 16:09:14.836637 | orchestrator | 16:09:14.836 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.836661 | orchestrator | 16:09:14.836 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 16:09:14.836668 | orchestrator | 16:09:14.836 STDOUT terraform:  } 2025-09-19 16:09:14.836690 | orchestrator | 16:09:14.836 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 
16:09:14.836719 | orchestrator | 16:09:14.836 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 16:09:14.836725 | orchestrator | 16:09:14.836 STDOUT terraform:  } 2025-09-19 16:09:14.836747 | orchestrator | 16:09:14.836 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.836774 | orchestrator | 16:09:14.836 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 16:09:14.836781 | orchestrator | 16:09:14.836 STDOUT terraform:  } 2025-09-19 16:09:14.836806 | orchestrator | 16:09:14.836 STDOUT terraform:  + binding (known after apply) 2025-09-19 16:09:14.836813 | orchestrator | 16:09:14.836 STDOUT terraform:  + fixed_ip { 2025-09-19 16:09:14.836841 | orchestrator | 16:09:14.836 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-19 16:09:14.836868 | orchestrator | 16:09:14.836 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 16:09:14.836875 | orchestrator | 16:09:14.836 STDOUT terraform:  } 2025-09-19 16:09:14.836890 | orchestrator | 16:09:14.836 STDOUT terraform:  } 2025-09-19 16:09:14.836934 | orchestrator | 16:09:14.836 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-19 16:09:14.836978 | orchestrator | 16:09:14.836 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-19 16:09:14.837013 | orchestrator | 16:09:14.836 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 16:09:14.837049 | orchestrator | 16:09:14.837 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-19 16:09:14.837083 | orchestrator | 16:09:14.837 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-19 16:09:14.837117 | orchestrator | 16:09:14.837 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 16:09:14.837151 | orchestrator | 16:09:14.837 STDOUT terraform:  + device_id = (known after apply) 2025-09-19 16:09:14.837187 | orchestrator | 16:09:14.837 STDOUT terraform:  + device_owner = (known after 
apply) 2025-09-19 16:09:14.837222 | orchestrator | 16:09:14.837 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-19 16:09:14.837257 | orchestrator | 16:09:14.837 STDOUT terraform:  + dns_name = (known after apply) 2025-09-19 16:09:14.837293 | orchestrator | 16:09:14.837 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.837329 | orchestrator | 16:09:14.837 STDOUT terraform:  + mac_address = (known after apply) 2025-09-19 16:09:14.837362 | orchestrator | 16:09:14.837 STDOUT terraform:  + network_id = (known after apply) 2025-09-19 16:09:14.837408 | orchestrator | 16:09:14.837 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-19 16:09:14.837488 | orchestrator | 16:09:14.837 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-19 16:09:14.837522 | orchestrator | 16:09:14.837 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.837557 | orchestrator | 16:09:14.837 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-19 16:09:14.837592 | orchestrator | 16:09:14.837 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 16:09:14.837613 | orchestrator | 16:09:14.837 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.837642 | orchestrator | 16:09:14.837 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-19 16:09:14.837655 | orchestrator | 16:09:14.837 STDOUT terraform:  } 2025-09-19 16:09:14.837674 | orchestrator | 16:09:14.837 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.837704 | orchestrator | 16:09:14.837 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-19 16:09:14.837713 | orchestrator | 16:09:14.837 STDOUT terraform:  } 2025-09-19 16:09:14.837733 | orchestrator | 16:09:14.837 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.837763 | orchestrator | 16:09:14.837 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-19 16:09:14.837773 | orchestrator | 16:09:14.837 STDOUT terraform:  } 
2025-09-19 16:09:14.837790 | orchestrator | 16:09:14.837 STDOUT terraform:  + allowed_address_pairs { 2025-09-19 16:09:14.837818 | orchestrator | 16:09:14.837 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-19 16:09:14.837829 | orchestrator | 16:09:14.837 STDOUT terraform:  } 2025-09-19 16:09:14.837850 | orchestrator | 16:09:14.837 STDOUT terraform:  + binding (known after apply) 2025-09-19 16:09:14.837859 | orchestrator | 16:09:14.837 STDOUT terraform:  + fixed_ip { 2025-09-19 16:09:14.837884 | orchestrator | 16:09:14.837 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-19 16:09:14.837914 | orchestrator | 16:09:14.837 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 16:09:14.837920 | orchestrator | 16:09:14.837 STDOUT terraform:  } 2025-09-19 16:09:14.837936 | orchestrator | 16:09:14.837 STDOUT terraform:  } 2025-09-19 16:09:14.837981 | orchestrator | 16:09:14.837 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-19 16:09:14.838064 | orchestrator | 16:09:14.837 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-19 16:09:14.838086 | orchestrator | 16:09:14.838 STDOUT terraform:  + force_destroy = false 2025-09-19 16:09:14.838113 | orchestrator | 16:09:14.838 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.838149 | orchestrator | 16:09:14.838 STDOUT terraform:  + port_id = (known after apply) 2025-09-19 16:09:14.838176 | orchestrator | 16:09:14.838 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.838204 | orchestrator | 16:09:14.838 STDOUT terraform:  + router_id = (known after apply) 2025-09-19 16:09:14.838236 | orchestrator | 16:09:14.838 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-19 16:09:14.838243 | orchestrator | 16:09:14.838 STDOUT terraform:  } 2025-09-19 16:09:14.838282 | orchestrator | 16:09:14.838 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-09-19 16:09:14.838318 | orchestrator | 16:09:14.838 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-19 16:09:14.838353 | orchestrator | 16:09:14.838 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-19 16:09:14.838387 | orchestrator | 16:09:14.838 STDOUT terraform:  + all_tags = (known after apply) 2025-09-19 16:09:14.838448 | orchestrator | 16:09:14.838 STDOUT terraform:  + availability_zone_hints = [ 2025-09-19 16:09:14.838455 | orchestrator | 16:09:14.838 STDOUT terraform:  + "nova", 2025-09-19 16:09:14.838460 | orchestrator | 16:09:14.838 STDOUT terraform:  ] 2025-09-19 16:09:14.838498 | orchestrator | 16:09:14.838 STDOUT terraform:  + distributed = (known after apply) 2025-09-19 16:09:14.838525 | orchestrator | 16:09:14.838 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-19 16:09:14.838573 | orchestrator | 16:09:14.838 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-19 16:09:14.838614 | orchestrator | 16:09:14.838 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-19 16:09:14.838642 | orchestrator | 16:09:14.838 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.838671 | orchestrator | 16:09:14.838 STDOUT terraform:  + name = "testbed" 2025-09-19 16:09:14.838707 | orchestrator | 16:09:14.838 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.838744 | orchestrator | 16:09:14.838 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 16:09:14.838771 | orchestrator | 16:09:14.838 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-19 16:09:14.838786 | orchestrator | 16:09:14.838 STDOUT terraform:  } 2025-09-19 16:09:14.838840 | orchestrator | 16:09:14.838 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-19 16:09:14.838892 | orchestrator | 16:09:14.838 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-19 16:09:14.838916 | orchestrator | 16:09:14.838 STDOUT terraform:  + description = "ssh" 2025-09-19 16:09:14.838945 | orchestrator | 16:09:14.838 STDOUT terraform:  + direction = "ingress" 2025-09-19 16:09:14.838970 | orchestrator | 16:09:14.838 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 16:09:14.839006 | orchestrator | 16:09:14.838 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.839030 | orchestrator | 16:09:14.839 STDOUT terraform:  + port_range_max = 22 2025-09-19 16:09:14.839053 | orchestrator | 16:09:14.839 STDOUT terraform:  + port_range_min = 22 2025-09-19 16:09:14.839076 | orchestrator | 16:09:14.839 STDOUT terraform:  + protocol = "tcp" 2025-09-19 16:09:14.839112 | orchestrator | 16:09:14.839 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.839145 | orchestrator | 16:09:14.839 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 16:09:14.839235 | orchestrator | 16:09:14.839 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 16:09:14.839266 | orchestrator | 16:09:14.839 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-19 16:09:14.839303 | orchestrator | 16:09:14.839 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 16:09:14.839338 | orchestrator | 16:09:14.839 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 16:09:14.839345 | orchestrator | 16:09:14.839 STDOUT terraform:  } 2025-09-19 16:09:14.839411 | orchestrator | 16:09:14.839 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-19 16:09:14.839463 | orchestrator | 16:09:14.839 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-19 16:09:14.839491 | orchestrator | 16:09:14.839 STDOUT
terraform:  + description = "wireguard" 2025-09-19 16:09:14.839520 | orchestrator | 16:09:14.839 STDOUT terraform:  + direction = "ingress" 2025-09-19 16:09:14.839544 | orchestrator | 16:09:14.839 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 16:09:14.839581 | orchestrator | 16:09:14.839 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.839606 | orchestrator | 16:09:14.839 STDOUT terraform:  + port_range_max = 51820 2025-09-19 16:09:14.839631 | orchestrator | 16:09:14.839 STDOUT terraform:  + port_range_min = 51820 2025-09-19 16:09:14.839656 | orchestrator | 16:09:14.839 STDOUT terraform:  + protocol = "udp" 2025-09-19 16:09:14.839690 | orchestrator | 16:09:14.839 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.839725 | orchestrator | 16:09:14.839 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 16:09:14.839759 | orchestrator | 16:09:14.839 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 16:09:14.839788 | orchestrator | 16:09:14.839 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-19 16:09:14.839825 | orchestrator | 16:09:14.839 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 16:09:14.839856 | orchestrator | 16:09:14.839 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 16:09:14.839863 | orchestrator | 16:09:14.839 STDOUT terraform:  } 2025-09-19 16:09:14.839916 | orchestrator | 16:09:14.839 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-19 16:09:14.839968 | orchestrator | 16:09:14.839 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-19 16:09:14.839996 | orchestrator | 16:09:14.839 STDOUT terraform:  + direction = "ingress" 2025-09-19 16:09:14.840020 | orchestrator | 16:09:14.839 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 16:09:14.840055 | orchestrator | 16:09:14.840 STDOUT terraform:  
+ id = (known after apply) 2025-09-19 16:09:14.840080 | orchestrator | 16:09:14.840 STDOUT terraform:  + protocol = "tcp" 2025-09-19 16:09:14.840116 | orchestrator | 16:09:14.840 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.840150 | orchestrator | 16:09:14.840 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 16:09:14.840185 | orchestrator | 16:09:14.840 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 16:09:14.840219 | orchestrator | 16:09:14.840 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-19 16:09:14.840276 | orchestrator | 16:09:14.840 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 16:09:14.840312 | orchestrator | 16:09:14.840 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 16:09:14.840325 | orchestrator | 16:09:14.840 STDOUT terraform:  } 2025-09-19 16:09:14.840376 | orchestrator | 16:09:14.840 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-19 16:09:14.840503 | orchestrator | 16:09:14.840 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-19 16:09:14.840532 | orchestrator | 16:09:14.840 STDOUT terraform:  + direction = "ingress" 2025-09-19 16:09:14.840556 | orchestrator | 16:09:14.840 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 16:09:14.840592 | orchestrator | 16:09:14.840 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.840617 | orchestrator | 16:09:14.840 STDOUT terraform:  + protocol = "udp" 2025-09-19 16:09:14.840654 | orchestrator | 16:09:14.840 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.840688 | orchestrator | 16:09:14.840 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 16:09:14.840723 | orchestrator | 16:09:14.840 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 16:09:14.840759 | 
orchestrator | 16:09:14.840 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-19 16:09:14.840794 | orchestrator | 16:09:14.840 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 16:09:14.840829 | orchestrator | 16:09:14.840 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 16:09:14.840836 | orchestrator | 16:09:14.840 STDOUT terraform:  } 2025-09-19 16:09:14.840891 | orchestrator | 16:09:14.840 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-19 16:09:14.840943 | orchestrator | 16:09:14.840 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-19 16:09:14.840972 | orchestrator | 16:09:14.840 STDOUT terraform:  + direction = "ingress" 2025-09-19 16:09:14.840996 | orchestrator | 16:09:14.840 STDOUT terraform:  + ethertype = "IPv4" 2025-09-19 16:09:14.841031 | orchestrator | 16:09:14.840 STDOUT terraform:  + id = (known after apply) 2025-09-19 16:09:14.841056 | orchestrator | 16:09:14.841 STDOUT terraform:  + protocol = "icmp" 2025-09-19 16:09:14.841095 | orchestrator | 16:09:14.841 STDOUT terraform:  + region = (known after apply) 2025-09-19 16:09:14.841127 | orchestrator | 16:09:14.841 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-19 16:09:14.841162 | orchestrator | 16:09:14.841 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-19 16:09:14.841191 | orchestrator | 16:09:14.841 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-19 16:09:14.841226 | orchestrator | 16:09:14.841 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-19 16:09:14.841264 | orchestrator | 16:09:14.841 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-19 16:09:14.841278 | orchestrator | 16:09:14.841 STDOUT terraform:  } 2025-09-19 16:09:14.841328 | orchestrator | 16:09:14.841 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
2025-09-19 16:09:14.841376 | orchestrator | 16:09:14.841 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
2025-09-19 16:09:14.841428 | orchestrator | 16:09:14.841 STDOUT terraform:  + direction = "ingress"
2025-09-19 16:09:14.841435 | orchestrator | 16:09:14.841 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 16:09:14.841470 | orchestrator | 16:09:14.841 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.841494 | orchestrator | 16:09:14.841 STDOUT terraform:  + protocol = "tcp"
2025-09-19 16:09:14.841531 | orchestrator | 16:09:14.841 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.841565 | orchestrator | 16:09:14.841 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 16:09:14.841600 | orchestrator | 16:09:14.841 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 16:09:14.841628 | orchestrator | 16:09:14.841 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 16:09:14.841663 | orchestrator | 16:09:14.841 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 16:09:14.841698 | orchestrator | 16:09:14.841 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 16:09:14.841704 | orchestrator | 16:09:14.841 STDOUT terraform:  }
2025-09-19 16:09:14.841756 | orchestrator | 16:09:14.841 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
2025-09-19 16:09:14.841805 | orchestrator | 16:09:14.841 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
2025-09-19 16:09:14.841833 | orchestrator | 16:09:14.841 STDOUT terraform:  + direction = "ingress"
2025-09-19 16:09:14.841857 | orchestrator | 16:09:14.841 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 16:09:14.841892 | orchestrator | 16:09:14.841 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.841916 | orchestrator | 16:09:14.841 STDOUT terraform:  + protocol = "udp"
2025-09-19 16:09:14.841952 | orchestrator | 16:09:14.841 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.841988 | orchestrator | 16:09:14.841 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 16:09:14.842044 | orchestrator | 16:09:14.841 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 16:09:14.842072 | orchestrator | 16:09:14.842 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 16:09:14.842106 | orchestrator | 16:09:14.842 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 16:09:14.842141 | orchestrator | 16:09:14.842 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 16:09:14.842155 | orchestrator | 16:09:14.842 STDOUT terraform:  }
2025-09-19 16:09:14.842205 | orchestrator | 16:09:14.842 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-09-19 16:09:14.842256 | orchestrator | 16:09:14.842 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-09-19 16:09:14.842284 | orchestrator | 16:09:14.842 STDOUT terraform:  + direction = "ingress"
2025-09-19 16:09:14.842309 | orchestrator | 16:09:14.842 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 16:09:14.842345 | orchestrator | 16:09:14.842 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.842369 | orchestrator | 16:09:14.842 STDOUT terraform:  + protocol = "icmp"
2025-09-19 16:09:14.842429 | orchestrator | 16:09:14.842 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.842464 | orchestrator | 16:09:14.842 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 16:09:14.842500 | orchestrator | 16:09:14.842 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 16:09:14.842528 | orchestrator | 16:09:14.842 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 16:09:14.842564 | orchestrator | 16:09:14.842 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 16:09:14.842599 | orchestrator | 16:09:14.842 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 16:09:14.842606 | orchestrator | 16:09:14.842 STDOUT terraform:  }
2025-09-19 16:09:14.842657 | orchestrator | 16:09:14.842 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-09-19 16:09:14.842705 | orchestrator | 16:09:14.842 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-09-19 16:09:14.842729 | orchestrator | 16:09:14.842 STDOUT terraform:  + description = "vrrp"
2025-09-19 16:09:14.842759 | orchestrator | 16:09:14.842 STDOUT terraform:  + direction = "ingress"
2025-09-19 16:09:14.842802 | orchestrator | 16:09:14.842 STDOUT terraform:  + ethertype = "IPv4"
2025-09-19 16:09:14.842820 | orchestrator | 16:09:14.842 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.842844 | orchestrator | 16:09:14.842 STDOUT terraform:  + protocol = "112"
2025-09-19 16:09:14.842881 | orchestrator | 16:09:14.842 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.842915 | orchestrator | 16:09:14.842 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-19 16:09:14.842951 | orchestrator | 16:09:14.842 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-19 16:09:14.842980 | orchestrator | 16:09:14.842 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-19 16:09:14.843016 | orchestrator | 16:09:14.842 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-19 16:09:14.843051 | orchestrator | 16:09:14.843 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 16:09:14.843058 | orchestrator | 16:09:14.843 STDOUT terraform:  }
2025-09-19 16:09:14.843108 | orchestrator | 16:09:14.843 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-09-19 16:09:14.843156 | orchestrator | 16:09:14.843 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-09-19 16:09:14.843185 | orchestrator | 16:09:14.843 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 16:09:14.843218 | orchestrator | 16:09:14.843 STDOUT terraform:  + description = "management security group"
2025-09-19 16:09:14.843246 | orchestrator | 16:09:14.843 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.843274 | orchestrator | 16:09:14.843 STDOUT terraform:  + name = "testbed-management"
2025-09-19 16:09:14.843302 | orchestrator | 16:09:14.843 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.843331 | orchestrator | 16:09:14.843 STDOUT terraform:  + stateful = (known after apply)
2025-09-19 16:09:14.843358 | orchestrator | 16:09:14.843 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 16:09:14.843364 | orchestrator | 16:09:14.843 STDOUT terraform:  }
2025-09-19 16:09:14.843424 | orchestrator | 16:09:14.843 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-09-19 16:09:14.843472 | orchestrator | 16:09:14.843 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-09-19 16:09:14.843496 | orchestrator | 16:09:14.843 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 16:09:14.843523 | orchestrator | 16:09:14.843 STDOUT terraform:  + description = "node security group"
2025-09-19 16:09:14.843551 | orchestrator | 16:09:14.843 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.843574 | orchestrator | 16:09:14.843 STDOUT terraform:  + name = "testbed-node"
2025-09-19 16:09:14.843602 | orchestrator | 16:09:14.843 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.843629 | orchestrator | 16:09:14.843 STDOUT terraform:  + stateful = (known after apply)
2025-09-19 16:09:14.843657 | orchestrator | 16:09:14.843 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 16:09:14.843664 | orchestrator | 16:09:14.843 STDOUT terraform:  }
2025-09-19 16:09:14.843709 | orchestrator | 16:09:14.843 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-09-19 16:09:14.843752 | orchestrator | 16:09:14.843 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-09-19 16:09:14.843782 | orchestrator | 16:09:14.843 STDOUT terraform:  + all_tags = (known after apply)
2025-09-19 16:09:14.843810 | orchestrator | 16:09:14.843 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-09-19 16:09:14.843829 | orchestrator | 16:09:14.843 STDOUT terraform:  + dns_nameservers = [
2025-09-19 16:09:14.843848 | orchestrator | 16:09:14.843 STDOUT terraform:  + "8.8.8.8",
2025-09-19 16:09:14.843859 | orchestrator | 16:09:14.843 STDOUT terraform:  + "9.9.9.9",
2025-09-19 16:09:14.843874 | orchestrator | 16:09:14.843 STDOUT terraform:  ]
2025-09-19 16:09:14.843895 | orchestrator | 16:09:14.843 STDOUT terraform:  + enable_dhcp = true
2025-09-19 16:09:14.843925 | orchestrator | 16:09:14.843 STDOUT terraform:  + gateway_ip = (known after apply)
2025-09-19 16:09:14.843955 | orchestrator | 16:09:14.843 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.843975 | orchestrator | 16:09:14.843 STDOUT terraform:  + ip_version = 4
2025-09-19 16:09:14.844004 | orchestrator | 16:09:14.843 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-09-19 16:09:14.844034 | orchestrator | 16:09:14.844 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-09-19 16:09:14.844071 | orchestrator | 16:09:14.844 STDOUT terraform:  + name = "subnet-testbed-management"
2025-09-19 16:09:14.844100 | orchestrator | 16:09:14.844 STDOUT terraform:  + network_id = (known after apply)
2025-09-19 16:09:14.844120 | orchestrator | 16:09:14.844 STDOUT terraform:  + no_gateway = false
2025-09-19 16:09:14.844151 | orchestrator | 16:09:14.844 STDOUT terraform:  + region = (known after apply)
2025-09-19 16:09:14.844180 | orchestrator | 16:09:14.844 STDOUT terraform:  + service_types = (known after apply)
2025-09-19 16:09:14.844209 | orchestrator | 16:09:14.844 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-19 16:09:14.844227 | orchestrator | 16:09:14.844 STDOUT terraform:  + allocation_pool {
2025-09-19 16:09:14.844250 | orchestrator | 16:09:14.844 STDOUT terraform:  + end = "192.168.31.250"
2025-09-19 16:09:14.844273 | orchestrator | 16:09:14.844 STDOUT terraform:  + start = "192.168.31.200"
2025-09-19 16:09:14.844286 | orchestrator | 16:09:14.844 STDOUT terraform:  }
2025-09-19 16:09:14.844292 | orchestrator | 16:09:14.844 STDOUT terraform:  }
2025-09-19 16:09:14.844318 | orchestrator | 16:09:14.844 STDOUT terraform:  # terraform_data.image will be created
2025-09-19 16:09:14.844342 | orchestrator | 16:09:14.844 STDOUT terraform:  + resource "terraform_data" "image" {
2025-09-19 16:09:14.844365 | orchestrator | 16:09:14.844 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.844384 | orchestrator | 16:09:14.844 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-19 16:09:14.844422 | orchestrator | 16:09:14.844 STDOUT terraform:  + output = (known after apply)
2025-09-19 16:09:14.844427 | orchestrator | 16:09:14.844 STDOUT terraform:  }
2025-09-19 16:09:14.844449 | orchestrator | 16:09:14.844 STDOUT terraform:  # terraform_data.image_node will be created
2025-09-19 16:09:14.844474 | orchestrator | 16:09:14.844 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-09-19 16:09:14.844497 | orchestrator | 16:09:14.844 STDOUT terraform:  + id = (known after apply)
2025-09-19 16:09:14.844516 | orchestrator | 16:09:14.844 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-19 16:09:14.844539 | orchestrator | 16:09:14.844 STDOUT terraform:  + output = (known after apply)
2025-09-19 16:09:14.844553 | orchestrator | 16:09:14.844 STDOUT terraform:  }
2025-09-19 16:09:14.844582 | orchestrator | 16:09:14.844 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-09-19 16:09:14.844592 | orchestrator | 16:09:14.844 STDOUT terraform: Changes to Outputs:
2025-09-19 16:09:14.844617 | orchestrator | 16:09:14.844 STDOUT terraform:  + manager_address = (sensitive value)
2025-09-19 16:09:14.844640 | orchestrator | 16:09:14.844 STDOUT terraform:  + private_key = (sensitive value)
2025-09-19 16:09:14.944442 | orchestrator | 16:09:14.944 STDOUT terraform: terraform_data.image: Creating...
2025-09-19 16:09:14.944496 | orchestrator | 16:09:14.944 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=f1b81d47-90fd-b5e4-66e1-0b601335b52b]
2025-09-19 16:09:14.944504 | orchestrator | 16:09:14.944 STDOUT terraform: terraform_data.image_node: Creating...
2025-09-19 16:09:14.944532 | orchestrator | 16:09:14.944 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=92c0223d-5d5d-0161-d0b5-33a0f8b4c32e]
2025-09-19 16:09:15.013860 | orchestrator | 16:09:15.013 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-09-19 16:09:15.016344 | orchestrator | 16:09:15.016 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-19 16:09:15.017702 | orchestrator | 16:09:15.017 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-19 16:09:15.020320 | orchestrator | 16:09:15.020 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-09-19 16:09:15.021080 | orchestrator | 16:09:15.020 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-09-19 16:09:15.021282 | orchestrator | 16:09:15.021 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-09-19 16:09:15.021902 | orchestrator | 16:09:15.021 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
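The security-group and subnet resources planned above can be sketched in HCL. This is an illustrative reconstruction from the plan output, not the testbed repository's actual Terraform source; the `security_group_id` and `network_id` references are assumptions.

```hcl
# Sketch reconstructed from the plan: per-protocol ingress rules open to
# 0.0.0.0/0 on the node security group, a VRRP rule (IP protocol 112),
# and the management subnet with its DHCP allocation pool.
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"            # rule2/rule3 are identical with "udp"/"icmp"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"            # VRRP, addressed by IP protocol number
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id  # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Note that the allocation pool (192.168.31.200-250) is deliberately a small slice of the /20, leaving the rest of the range free for statically addressed hosts.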
2025-09-19 16:09:15.021972 | orchestrator | 16:09:15.021 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-19 16:09:15.022264 | orchestrator | 16:09:15.022 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-19 16:09:15.023642 | orchestrator | 16:09:15.023 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-19 16:09:15.614575 | orchestrator | 16:09:15.614 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2025-09-19 16:09:15.622117 | orchestrator | 16:09:15.621 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-09-19 16:09:16.107337 | orchestrator | 16:09:16.107 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=ab0fda08-0040-4fa4-bb4d-c216a27621fa]
2025-09-19 16:09:16.110625 | orchestrator | 16:09:16.110 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-09-19 16:09:16.172761 | orchestrator | 16:09:16.172 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-19 16:09:16.175805 | orchestrator | 16:09:16.175 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-09-19 16:09:16.227992 | orchestrator | 16:09:16.227 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-19 16:09:16.240652 | orchestrator | 16:09:16.240 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-19 16:09:16.244652 | orchestrator | 16:09:16.244 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=5ab7fc008887dadfcb8bca7a7ef70cf090013345]
2025-09-19 16:09:16.251032 | orchestrator | 16:09:16.250 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-19 16:09:16.258864 | orchestrator | 16:09:16.258 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=26bb8b25fe864c7fa9c1ab2ea8af4d33f0e91a1e]
2025-09-19 16:09:16.265434 | orchestrator | 16:09:16.265 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-19 16:09:17.088301 | orchestrator | 16:09:17.087 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=00b01a0c-31d7-43c9-9cbd-60d9eef653d5]
2025-09-19 16:09:17.098201 | orchestrator | 16:09:17.097 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-19 16:09:18.715526 | orchestrator | 16:09:18.709 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=8ef3193b-7b85-4a69-91dc-ff1919c1d0b3]
2025-09-19 16:09:18.716852 | orchestrator | 16:09:18.711 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=49605ec5-af84-4e56-b6e7-0932efbf1bcd]
2025-09-19 16:09:18.721889 | orchestrator | 16:09:18.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-19 16:09:18.721953 | orchestrator | 16:09:18.721 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-19 16:09:18.725213 | orchestrator | 16:09:18.725 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=9516e090-09d3-47b2-a672-12f5ce683363]
2025-09-19 16:09:18.732837 | orchestrator | 16:09:18.732 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-19 16:09:18.747822 | orchestrator | 16:09:18.747 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=8c3574da-2fac-4f58-bc83-f51ba9425a73]
2025-09-19 16:09:18.751930 | orchestrator | 16:09:18.751 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=8547d473-0710-428a-9585-3879cf611acd]
2025-09-19 16:09:18.753784 | orchestrator | 16:09:18.753 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-19 16:09:18.758349 | orchestrator | 16:09:18.758 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-19 16:09:18.763068 | orchestrator | 16:09:18.762 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 4s [id=ea7e2490-24d2-49b7-b6d3-38bb6098dff1]
2025-09-19 16:09:18.771310 | orchestrator | 16:09:18.771 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-19 16:09:18.776607 | orchestrator | 16:09:18.776 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=5e704911-d475-45db-a46e-b2c1a2edd26e]
2025-09-19 16:09:18.781340 | orchestrator | 16:09:18.781 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-09-19 16:09:18.794663 | orchestrator | 16:09:18.794 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=bc231350-c60d-45ad-9b08-eb0e8cdec0b5]
2025-09-19 16:09:19.055330 | orchestrator | 16:09:19.054 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=bfd7083e-59a5-451a-9789-189314eae1f5]
2025-09-19 16:09:20.444337 | orchestrator | 16:09:20.443 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=a64d2943-68d6-43ca-9e98-c6f4ed260dcf]
2025-09-19 16:09:22.005909 | orchestrator | 16:09:22.005 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 3s [id=52363f13-d050-4d81-ad59-b01708cddb48]
2025-09-19 16:09:22.013703 | orchestrator | 16:09:22.013 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-19 16:09:22.013812 | orchestrator | 16:09:22.013 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-19 16:09:22.015204 | orchestrator | 16:09:22.014 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-19 16:09:22.123565 | orchestrator | 16:09:22.123 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=67fe6e8c-959c-4183-b14f-1847ba00206a]
2025-09-19 16:09:22.153793 | orchestrator | 16:09:22.153 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=bdf17f48-750a-4da2-b9bc-22b260044989]
2025-09-19 16:09:22.181214 | orchestrator | 16:09:22.180 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=e87ac32e-cfe1-4641-bda3-fc317b60eb0f]
2025-09-19 16:09:22.185487 | orchestrator | 16:09:22.185 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 3s [id=3bce6968-b173-46ae-973f-b101cd95971f]
2025-09-19 16:09:22.192115 | orchestrator | 16:09:22.191 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=a82e8093-3de3-4a12-a6e9-17b4f73e23a8]
2025-09-19 16:09:22.198627 | orchestrator | 16:09:22.198 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-19 16:09:22.199191 | orchestrator | 16:09:22.198 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-19 16:09:22.206551 | orchestrator | 16:09:22.206 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=716b3d72-a126-4679-914a-2f4586f413fc]
2025-09-19 16:09:22.207962 | orchestrator | 16:09:22.207 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-19 16:09:22.212868 | orchestrator | 16:09:22.212 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-19 16:09:22.213115 | orchestrator | 16:09:22.212 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-19 16:09:22.221653 | orchestrator | 16:09:22.221 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-19 16:09:22.236057 | orchestrator | 16:09:22.235 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=2d9a821d-e10a-4060-b00f-79b257d4e791]
2025-09-19 16:09:22.245007 | orchestrator | 16:09:22.244 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-19 16:09:22.245790 | orchestrator | 16:09:22.245 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-19 16:09:22.303547 | orchestrator | 16:09:22.303 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=91605ab8-a526-47f7-b42b-efc568288447]
2025-09-19 16:09:22.309997 | orchestrator | 16:09:22.309 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-19 16:09:22.431421 | orchestrator | 16:09:22.431 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=f021368c-5187-4fa9-aa05-e96d5650bd9e]
2025-09-19 16:09:22.447101 | orchestrator | 16:09:22.446 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-19 16:09:22.576540 | orchestrator | 16:09:22.576 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=983b1c22-39c2-404f-9317-49894e0575cc]
2025-09-19 16:09:22.587857 | orchestrator | 16:09:22.587 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-19 16:09:22.613357 | orchestrator | 16:09:22.613 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=b9edfd2c-d719-4902-a378-7b587a73c649]
2025-09-19 16:09:22.632670 | orchestrator | 16:09:22.632 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-19 16:09:22.784442 | orchestrator | 16:09:22.784 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=9dcb1239-3e16-4ce6-85ea-09244534fb51]
2025-09-19 16:09:22.797798 | orchestrator | 16:09:22.797 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-19 16:09:22.924716 | orchestrator | 16:09:22.924 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=7114f50e-dd4e-470a-b863-a591990bac1b]
2025-09-19 16:09:22.939882 | orchestrator | 16:09:22.939 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-19 16:09:23.125870 | orchestrator | 16:09:23.125 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=2b450250-426c-42a1-8033-a38f3efdc280]
2025-09-19 16:09:23.139790 | orchestrator | 16:09:23.139 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-19 16:09:23.382084 | orchestrator | 16:09:23.381 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 0s [id=e471a9f6-4016-450d-a81b-c5dcd8afbb8e]
2025-09-19 16:09:23.393751 | orchestrator | 16:09:23.393 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=96d9c73b-0b8e-4d00-abdd-902ac2b56424]
2025-09-19 16:09:23.397949 | orchestrator | 16:09:23.397 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-19 16:09:23.492701 | orchestrator | 16:09:23.492 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=b0a9a23b-e12d-4d39-863b-60351d7b9f62]
2025-09-19 16:09:23.510269 | orchestrator | 16:09:23.509 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=ffcdef78-a146-459f-a7c1-6ffd06ca9357]
2025-09-19 16:09:23.591764 | orchestrator | 16:09:23.591 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=2be170b4-2013-4b68-9aec-a458a27a5d81]
2025-09-19 16:09:23.606202 | orchestrator | 16:09:23.605 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=41d0b8b8-b359-4cce-bf71-5a9728a87d4e]
2025-09-19 16:09:23.701230 | orchestrator | 16:09:23.700 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 2s [id=1b7d3369-41fa-47a2-b050-f6d54a973478]
2025-09-19 16:09:23.917783 | orchestrator | 16:09:23.917 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=236694e6-609c-4448-b14f-a4733f5c6b54]
2025-09-19 16:09:23.968456 | orchestrator | 16:09:23.968 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=20c9a96c-317b-473b-9919-afbf5fee4aa2]
2025-09-19 16:09:23.989472 | orchestrator | 16:09:23.989 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=8750fb2d-e3a5-443d-b25d-7170c2c94e26]
2025-09-19 16:09:24.576342 | orchestrator | 16:09:24.575 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=7967e671-068b-4609-8411-5a33ef469850]
2025-09-19 16:09:24.600776 | orchestrator | 16:09:24.600 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-19 16:09:24.612610 | orchestrator | 16:09:24.612 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-19 16:09:24.617731 | orchestrator | 16:09:24.617 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-19 16:09:24.632890 | orchestrator | 16:09:24.632 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-19 16:09:24.632945 | orchestrator | 16:09:24.632 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-19 16:09:24.632951 | orchestrator | 16:09:24.632 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-19 16:09:24.632956 | orchestrator | 16:09:24.632 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-19 16:09:26.523104 | orchestrator | 16:09:26.522 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=5a54db39-3658-4c71-b59b-ae4bc1894b3b]
2025-09-19 16:09:26.530747 | orchestrator | 16:09:26.530 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-19 16:09:26.538671 | orchestrator | 16:09:26.538 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-19 16:09:26.539105 | orchestrator | 16:09:26.539 STDOUT terraform: local_file.inventory: Creating...
2025-09-19 16:09:26.542795 | orchestrator | 16:09:26.542 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=124deaf7d086e073fc2ce8fe3a36ec338e7c5046]
2025-09-19 16:09:26.546307 | orchestrator | 16:09:26.546 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=de66e7fd81458aef650b9e29b7f3a0d23514916d]
2025-09-19 16:09:27.407211 | orchestrator | 16:09:27.406 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=5a54db39-3658-4c71-b59b-ae4bc1894b3b]
2025-09-19 16:09:34.620680 | orchestrator | 16:09:34.620 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-19 16:09:34.623863 | orchestrator | 16:09:34.623 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-19 16:09:34.636201 | orchestrator | 16:09:34.635 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-19 16:09:34.636278 | orchestrator | 16:09:34.636 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-19 16:09:34.636475 | orchestrator | 16:09:34.636 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-19 16:09:34.636585 | orchestrator | 16:09:34.636 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-19 16:09:44.622187 | orchestrator | 16:09:44.621 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-19 16:09:44.624372 | orchestrator | 16:09:44.624 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-19 16:09:44.636811 | orchestrator | 16:09:44.636 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-19 16:09:44.636899 | orchestrator | 16:09:44.636 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-19 16:09:44.637043 | orchestrator | 16:09:44.636 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-19 16:09:44.637143 | orchestrator | 16:09:44.636 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-19 16:09:45.193270 | orchestrator | 16:09:45.192 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=e2a932c0-7c89-404c-8b31-081927aa09e7]
2025-09-19 16:09:45.306475 | orchestrator | 16:09:45.305 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=41b8e978-01d9-4964-a274-04bfcf533195]
2025-09-19 16:09:54.623333 | orchestrator | 16:09:54.623 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2025-09-19 16:09:54.637523 | orchestrator | 16:09:54.637 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-09-19 16:09:54.637573 | orchestrator | 16:09:54.637 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-09-19 16:09:54.637663 | orchestrator | 16:09:54.637 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2025-09-19 16:09:55.444402 | orchestrator | 16:09:55.444 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=3367b3f9-2e57-40d6-a216-3ef347ade7f8]
2025-09-19 16:09:55.449818 | orchestrator | 16:09:55.449 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=7907eaea-d0c5-4216-93ea-88bfbce13bac]
2025-09-19 16:09:55.478662 | orchestrator | 16:09:55.478 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 30s [id=57cc0cce-24c0-4e4a-8577-a08cf30da339]
2025-09-19 16:10:04.638713 | orchestrator | 16:10:04.638 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed]
2025-09-19 16:10:05.641156 | orchestrator | 16:10:05.640 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 41s [id=6cb238ca-dcb3-44fc-87df-6a4b3e2a001d]
2025-09-19 16:10:05.676105 | orchestrator | 16:10:05.675 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-19 16:10:05.677056 | orchestrator | 16:10:05.676 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-19 16:10:05.677082 | orchestrator | 16:10:05.676 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-19 16:10:05.681344 | orchestrator | 16:10:05.681 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=4569172889001918478]
2025-09-19 16:10:05.681738 | orchestrator | 16:10:05.681 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-19 16:10:05.683764 | orchestrator | 16:10:05.683 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-19 16:10:05.684130 | orchestrator | 16:10:05.684 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
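The `node_volume_attachment[n]` resources being created here pair data volumes with node instances; the attachment ids in the log take the form `<instance-id>/<volume-id>`. A hedged sketch of the count-indexed pattern (the index mapping and variable names are assumptions, not the testbed's actual source):

```hcl
# Illustrative sketch: nine data volumes spread across the node servers.
# The log's ids suggest the volumes land on node_server[3..5] (three
# volumes each); the modulo mapping below is an assumption.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```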
2025-09-19 16:10:05.691176 | orchestrator | 16:10:05.691 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-09-19 16:10:05.699963 | orchestrator | 16:10:05.699 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-09-19 16:10:05.700349 | orchestrator | 16:10:05.700 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-09-19 16:10:05.709492 | orchestrator | 16:10:05.709 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-09-19 16:10:05.716215 | orchestrator | 16:10:05.716 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-09-19 16:10:09.067179 | orchestrator | 16:10:09.066 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=7907eaea-d0c5-4216-93ea-88bfbce13bac/bfd7083e-59a5-451a-9789-189314eae1f5] 2025-09-19 16:10:09.068655 | orchestrator | 16:10:09.068 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=3367b3f9-2e57-40d6-a216-3ef347ade7f8/bc231350-c60d-45ad-9b08-eb0e8cdec0b5] 2025-09-19 16:10:09.111014 | orchestrator | 16:10:09.110 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=41b8e978-01d9-4964-a274-04bfcf533195/8ef3193b-7b85-4a69-91dc-ff1919c1d0b3] 2025-09-19 16:10:09.114104 | orchestrator | 16:10:09.113 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=3367b3f9-2e57-40d6-a216-3ef347ade7f8/ea7e2490-24d2-49b7-b6d3-38bb6098dff1] 2025-09-19 16:10:09.139564 | orchestrator | 16:10:09.139 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=7907eaea-d0c5-4216-93ea-88bfbce13bac/9516e090-09d3-47b2-a672-12f5ce683363] 2025-09-19 16:10:09.142472 | orchestrator | 
16:10:09.142 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=41b8e978-01d9-4964-a274-04bfcf533195/8547d473-0710-428a-9585-3879cf611acd] 2025-09-19 16:10:15.241042 | orchestrator | 16:10:15.240 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=3367b3f9-2e57-40d6-a216-3ef347ade7f8/5e704911-d475-45db-a46e-b2c1a2edd26e] 2025-09-19 16:10:15.259907 | orchestrator | 16:10:15.259 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=7907eaea-d0c5-4216-93ea-88bfbce13bac/49605ec5-af84-4e56-b6e7-0932efbf1bcd] 2025-09-19 16:10:15.273617 | orchestrator | 16:10:15.273 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=41b8e978-01d9-4964-a274-04bfcf533195/8c3574da-2fac-4f58-bc83-f51ba9425a73] 2025-09-19 16:10:15.718277 | orchestrator | 16:10:15.717 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-09-19 16:10:25.720527 | orchestrator | 16:10:25.720 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-09-19 16:10:26.138847 | orchestrator | 16:10:26.138 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=bc0b4eae-76b1-4631-8ab8-6d0bb2af16a2] 2025-09-19 16:10:26.169857 | orchestrator | 16:10:26.169 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
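The `openstack_compute_volume_attach_v2` IDs reported above are the server UUID and the volume UUID joined by a slash. A small sketch for splitting such an ID back into its parts (`split_attach_id` is a hypothetical helper for log analysis, not something the job runs):

```shell
# Split an attachment ID of the form "<server-uuid>/<volume-uuid>",
# as printed by the Terraform OpenStack provider, into its two halves.
split_attach_id() {
  local id="$1"
  printf 'server=%s\n' "${id%%/*}"
  printf 'volume=%s\n' "${id##*/}"
}

split_attach_id "7907eaea-d0c5-4216-93ea-88bfbce13bac/bfd7083e-59a5-451a-9789-189314eae1f5"
```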
2025-09-19 16:10:26.170117 | orchestrator | 16:10:26.169 STDOUT terraform: Outputs:
2025-09-19 16:10:26.170174 | orchestrator | 16:10:26.169 STDOUT terraform: manager_address =
2025-09-19 16:10:26.170188 | orchestrator | 16:10:26.169 STDOUT terraform: private_key =
2025-09-19 16:10:26.582392 | orchestrator | ok: Runtime: 0:01:17.291615
2025-09-19 16:10:26.622393 |
2025-09-19 16:10:26.622575 | TASK [Create infrastructure (stable)]
2025-09-19 16:10:27.159983 | orchestrator | skipping: Conditional result was False
2025-09-19 16:10:27.177278 |
2025-09-19 16:10:27.177425 | TASK [Fetch manager address]
2025-09-19 16:10:27.602049 | orchestrator | ok
2025-09-19 16:10:27.612146 |
2025-09-19 16:10:27.612282 | TASK [Set manager_host address]
2025-09-19 16:10:27.681351 | orchestrator | ok
2025-09-19 16:10:27.690763 |
2025-09-19 16:10:27.690956 | LOOP [Update ansible collections]
2025-09-19 16:10:28.757441 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-19 16:10:28.757711 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-19 16:10:28.757748 | orchestrator | Starting galaxy collection install process
2025-09-19 16:10:28.757773 | orchestrator | Process install dependency map
2025-09-19 16:10:28.757794 | orchestrator | Starting collection install process
2025-09-19 16:10:28.757815 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons'
2025-09-19 16:10:28.757839 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons
2025-09-19 16:10:28.757867 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-09-19 16:10:28.757934 | orchestrator | ok: Item: commons Runtime: 0:00:00.774238
2025-09-19 16:10:29.577684 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
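As the install messages above show, `ansible-galaxy` places a collection `namespace.name` under `<collections-root>/ansible_collections/<namespace>/<name>`. A minimal sketch of that path mapping (`collection_path` is a hypothetical helper, not a real ansible-galaxy command):

```shell
# Map a fully qualified collection name like "osism.commons" to the
# directory ansible-galaxy installs it into, as seen in the log above.
collection_path() {
  local root="$1" fqcn="$2"
  printf '%s/ansible_collections/%s/%s\n' "$root" "${fqcn%%.*}" "${fqcn#*.}"
}

collection_path "$HOME/.ansible/collections" "osism.commons"
```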
2025-09-19 16:10:29.577864 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-09-19 16:10:29.577942 | orchestrator | Starting galaxy collection install process 2025-09-19 16:10:29.577987 | orchestrator | Process install dependency map 2025-09-19 16:10:29.578028 | orchestrator | Starting collection install process 2025-09-19 16:10:29.578066 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2025-09-19 16:10:29.578105 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2025-09-19 16:10:29.578141 | orchestrator | osism.services:999.0.0 was installed successfully 2025-09-19 16:10:29.578200 | orchestrator | ok: Item: services Runtime: 0:00:00.555532 2025-09-19 16:10:29.593190 | 2025-09-19 16:10:29.593313 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-19 16:10:40.169942 | orchestrator | ok 2025-09-19 16:10:40.179713 | 2025-09-19 16:10:40.179824 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-19 16:11:40.219953 | orchestrator | ok 2025-09-19 16:11:40.230233 | 2025-09-19 16:11:40.230344 | TASK [Fetch manager ssh hostkey] 2025-09-19 16:11:41.800024 | orchestrator | Output suppressed because no_log was given 2025-09-19 16:11:41.817007 | 2025-09-19 16:11:41.817188 | TASK [Get ssh keypair from terraform environment] 2025-09-19 16:11:42.354405 | orchestrator | ok: Runtime: 0:00:00.009779 2025-09-19 16:11:42.370484 | 2025-09-19 16:11:42.370649 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-19 16:11:42.408828 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
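The task above waits not just for port 22 to open but for the banner on it to contain "OpenSSH" (in Ansible terms, a `wait_for` with `search_regex: OpenSSH`), which filters out half-booted hosts whose port is up but whose sshd is not the final one. A rough sketch of the banner check itself (`banner_ready` is my own name; the job uses the Ansible module, not this function):

```shell
# Succeed only when an SSH banner string identifies an OpenSSH server,
# mirroring the search_regex=OpenSSH condition in the wait task above.
banner_ready() {
  case "$1" in
    *OpenSSH*) return 0 ;;
    *)         return 1 ;;
  esac
}

banner_ready "SSH-2.0-OpenSSH_9.6p1 Ubuntu-3ubuntu13" && echo ready
```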
2025-09-19 16:11:42.419228 | 2025-09-19 16:11:42.419357 | TASK [Run manager part 0] 2025-09-19 16:11:43.260786 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-19 16:11:43.304836 | orchestrator | 2025-09-19 16:11:43.304885 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-19 16:11:43.304892 | orchestrator | 2025-09-19 16:11:43.304906 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-19 16:11:45.122827 | orchestrator | ok: [testbed-manager] 2025-09-19 16:11:45.122874 | orchestrator | 2025-09-19 16:11:45.122892 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-19 16:11:45.122901 | orchestrator | 2025-09-19 16:11:45.122909 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 16:11:47.020806 | orchestrator | ok: [testbed-manager] 2025-09-19 16:11:47.020895 | orchestrator | 2025-09-19 16:11:47.020907 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-19 16:11:47.893752 | orchestrator | ok: [testbed-manager] 2025-09-19 16:11:47.893849 | orchestrator | 2025-09-19 16:11:47.893864 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-19 16:11:48.311262 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:11:48.311361 | orchestrator | 2025-09-19 16:11:48.311400 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-19 16:11:48.311413 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:11:48.311423 | orchestrator | 2025-09-19 16:11:48.311433 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-19 16:11:48.311443 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:11:48.311452 | 
orchestrator | 2025-09-19 16:11:48.311462 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-19 16:11:48.311482 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:11:48.311493 | orchestrator | 2025-09-19 16:11:48.311502 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-19 16:11:48.311511 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:11:48.311521 | orchestrator | 2025-09-19 16:11:48.311530 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-19 16:11:48.311540 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:11:48.311549 | orchestrator | 2025-09-19 16:11:48.311559 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-19 16:11:48.311569 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:11:48.311578 | orchestrator | 2025-09-19 16:11:48.311587 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-19 16:11:49.079778 | orchestrator | changed: [testbed-manager] 2025-09-19 16:11:49.079847 | orchestrator | 2025-09-19 16:11:49.079860 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-19 16:14:14.759042 | orchestrator | changed: [testbed-manager] 2025-09-19 16:14:14.759107 | orchestrator | 2025-09-19 16:14:14.759125 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-19 16:15:29.886615 | orchestrator | changed: [testbed-manager] 2025-09-19 16:15:29.886803 | orchestrator | 2025-09-19 16:15:29.886837 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-19 16:15:53.651783 | orchestrator | changed: [testbed-manager] 2025-09-19 16:15:53.651828 | orchestrator | 2025-09-19 16:15:53.651839 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-09-19 16:16:02.192204 | orchestrator | changed: [testbed-manager] 2025-09-19 16:16:02.192250 | orchestrator | 2025-09-19 16:16:02.192259 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-19 16:16:02.238165 | orchestrator | ok: [testbed-manager] 2025-09-19 16:16:02.238203 | orchestrator | 2025-09-19 16:16:02.238211 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-19 16:16:03.020725 | orchestrator | ok: [testbed-manager] 2025-09-19 16:16:03.020766 | orchestrator | 2025-09-19 16:16:03.020776 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-19 16:16:03.752082 | orchestrator | changed: [testbed-manager] 2025-09-19 16:16:03.752168 | orchestrator | 2025-09-19 16:16:03.752182 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-19 16:16:10.116844 | orchestrator | changed: [testbed-manager] 2025-09-19 16:16:10.116938 | orchestrator | 2025-09-19 16:16:10.116981 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-09-19 16:16:16.406619 | orchestrator | changed: [testbed-manager] 2025-09-19 16:16:16.406705 | orchestrator | 2025-09-19 16:16:16.406723 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-19 16:16:19.139659 | orchestrator | changed: [testbed-manager] 2025-09-19 16:16:19.140426 | orchestrator | 2025-09-19 16:16:19.140446 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-19 16:16:20.891439 | orchestrator | changed: [testbed-manager] 2025-09-19 16:16:20.891526 | orchestrator | 2025-09-19 16:16:20.891542 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-19 
16:16:21.987522 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-19 16:16:21.987606 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-19 16:16:21.987628 | orchestrator | 2025-09-19 16:16:21.987649 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-19 16:16:22.028228 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-19 16:16:22.028275 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-19 16:16:22.028281 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-19 16:16:22.028285 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-09-19 16:16:25.961335 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-19 16:16:25.961448 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-19 16:16:25.961460 | orchestrator | 2025-09-19 16:16:25.961469 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-19 16:16:26.514486 | orchestrator | changed: [testbed-manager] 2025-09-19 16:16:26.514570 | orchestrator | 2025-09-19 16:16:26.514585 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-19 16:20:48.736335 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-19 16:20:48.736387 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-19 16:20:48.736397 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-19 16:20:48.736405 | orchestrator | 2025-09-19 16:20:48.736413 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-19 16:20:51.004568 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-09-19 16:20:51.004634 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-19 16:20:51.004642 | orchestrator | 2025-09-19 16:20:51.004650 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-19 16:20:51.004657 | orchestrator | 2025-09-19 16:20:51.004663 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 16:20:52.476105 | orchestrator | ok: [testbed-manager] 2025-09-19 16:20:52.476175 | orchestrator | 2025-09-19 16:20:52.476188 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-19 16:20:52.522337 | orchestrator | ok: [testbed-manager] 2025-09-19 16:20:52.522395 | orchestrator | 2025-09-19 16:20:52.522405 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-19 16:20:52.576600 | orchestrator | ok: [testbed-manager] 2025-09-19 16:20:52.576641 | orchestrator | 2025-09-19 16:20:52.576649 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-19 16:20:53.350697 | orchestrator | changed: [testbed-manager] 2025-09-19 16:20:53.350780 | orchestrator | 2025-09-19 16:20:53.350796 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-19 16:20:54.082193 | orchestrator | changed: [testbed-manager] 2025-09-19 16:20:54.082237 | orchestrator | 2025-09-19 16:20:54.082246 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-19 16:20:55.455216 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-19 16:20:55.455333 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-19 16:20:55.455349 | orchestrator | 2025-09-19 16:20:55.455378 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-09-19 16:20:56.752171 | orchestrator | changed: [testbed-manager] 2025-09-19 16:20:56.752262 | orchestrator | 2025-09-19 16:20:56.752272 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-19 16:20:58.509801 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-19 16:20:58.509895 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-19 16:20:58.509908 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-19 16:20:58.509920 | orchestrator | 2025-09-19 16:20:58.509933 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-19 16:20:58.562952 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:20:58.563039 | orchestrator | 2025-09-19 16:20:58.563054 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-19 16:20:59.111591 | orchestrator | changed: [testbed-manager] 2025-09-19 16:20:59.112392 | orchestrator | 2025-09-19 16:20:59.112436 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-19 16:20:59.178535 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:20:59.178603 | orchestrator | 2025-09-19 16:20:59.178613 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-19 16:21:00.022063 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-19 16:21:00.022157 | orchestrator | changed: [testbed-manager] 2025-09-19 16:21:00.022173 | orchestrator | 2025-09-19 16:21:00.022185 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-19 16:21:00.059193 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:21:00.059283 | orchestrator | 2025-09-19 16:21:00.059324 | orchestrator | TASK 
[osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-19 16:21:00.092489 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:21:00.092568 | orchestrator | 2025-09-19 16:21:00.092582 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-19 16:21:00.119841 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:21:00.119930 | orchestrator | 2025-09-19 16:21:00.119945 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-19 16:21:00.165504 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:21:00.165596 | orchestrator | 2025-09-19 16:21:00.165614 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-19 16:21:00.888682 | orchestrator | ok: [testbed-manager] 2025-09-19 16:21:00.888728 | orchestrator | 2025-09-19 16:21:00.888733 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-19 16:21:00.888738 | orchestrator | 2025-09-19 16:21:00.888742 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 16:21:02.312913 | orchestrator | ok: [testbed-manager] 2025-09-19 16:21:02.312977 | orchestrator | 2025-09-19 16:21:02.312991 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-19 16:21:03.276619 | orchestrator | changed: [testbed-manager] 2025-09-19 16:21:03.276655 | orchestrator | 2025-09-19 16:21:03.276660 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 16:21:03.276667 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-19 16:21:03.276671 | orchestrator | 2025-09-19 16:21:03.797328 | orchestrator | ok: Runtime: 0:09:20.685766 2025-09-19 16:21:03.823218 | 2025-09-19 16:21:03.823491 | TASK [Point 
out that logging in on the manager is now possible]
2025-09-19 16:21:03.862627 | orchestrator | ok: It is now possible to log in to the manager with 'make login'.
2025-09-19 16:21:03.874617 |
2025-09-19 16:21:03.874786 | TASK [Point out that the following task takes some time and does not give any output]
2025-09-19 16:21:03.915811 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-09-19 16:21:03.925646 |
2025-09-19 16:21:03.925783 | TASK [Run manager part 1 + 2]
2025-09-19 16:21:04.741456 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-19 16:21:04.796919 | orchestrator |
2025-09-19 16:21:04.796972 | orchestrator | PLAY [Run manager part 1] ******************************************************
2025-09-19 16:21:04.796979 | orchestrator |
2025-09-19 16:21:04.796993 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 16:21:07.718263 | orchestrator | ok: [testbed-manager]
2025-09-19 16:21:07.718371 | orchestrator |
2025-09-19 16:21:07.718415 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2025-09-19 16:21:07.753723 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:21:07.753789 | orchestrator |
2025-09-19 16:21:07.753804 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2025-09-19 16:21:07.793448 | orchestrator | ok: [testbed-manager]
2025-09-19 16:21:07.793511 | orchestrator |
2025-09-19 16:21:07.793524 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-19 16:21:07.832997 | orchestrator | ok: [testbed-manager]
2025-09-19 16:21:07.833094 | orchestrator |
2025-09-19 16:21:07.833112 | orchestrator | TASK [osism.commons.repository : Set repository_default fact
to default value] *** 2025-09-19 16:21:07.894532 | orchestrator | ok: [testbed-manager] 2025-09-19 16:21:07.894585 | orchestrator | 2025-09-19 16:21:07.894593 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-19 16:21:07.951377 | orchestrator | ok: [testbed-manager] 2025-09-19 16:21:07.951430 | orchestrator | 2025-09-19 16:21:07.951438 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-19 16:21:07.994451 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-19 16:21:07.994502 | orchestrator | 2025-09-19 16:21:07.994508 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-19 16:21:08.723427 | orchestrator | ok: [testbed-manager] 2025-09-19 16:21:08.723594 | orchestrator | 2025-09-19 16:21:08.723614 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-19 16:21:08.773851 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:21:08.773912 | orchestrator | 2025-09-19 16:21:08.773921 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-19 16:21:10.104961 | orchestrator | changed: [testbed-manager] 2025-09-19 16:21:10.105044 | orchestrator | 2025-09-19 16:21:10.105062 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-19 16:21:10.670935 | orchestrator | ok: [testbed-manager] 2025-09-19 16:21:10.671009 | orchestrator | 2025-09-19 16:21:10.671024 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-19 16:21:11.796841 | orchestrator | changed: [testbed-manager] 2025-09-19 16:21:11.796889 | orchestrator | 2025-09-19 16:21:11.796903 | orchestrator | TASK [osism.commons.repository : Update 
package cache] ************************* 2025-09-19 16:21:28.593368 | orchestrator | changed: [testbed-manager] 2025-09-19 16:21:28.593463 | orchestrator | 2025-09-19 16:21:28.593480 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-19 16:21:29.254891 | orchestrator | ok: [testbed-manager] 2025-09-19 16:21:29.254974 | orchestrator | 2025-09-19 16:21:29.254992 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-19 16:21:29.299069 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:21:29.299154 | orchestrator | 2025-09-19 16:21:29.299170 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-19 16:21:30.244698 | orchestrator | changed: [testbed-manager] 2025-09-19 16:21:30.244740 | orchestrator | 2025-09-19 16:21:30.244748 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-19 16:21:31.189160 | orchestrator | changed: [testbed-manager] 2025-09-19 16:21:31.189244 | orchestrator | 2025-09-19 16:21:31.189259 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-19 16:21:31.755943 | orchestrator | changed: [testbed-manager] 2025-09-19 16:21:31.756048 | orchestrator | 2025-09-19 16:21:31.756063 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-19 16:21:31.794266 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-19 16:21:31.794404 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-19 16:21:31.794421 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-19 16:21:31.794433 | orchestrator | deprecation_warnings=False in ansible.cfg. 
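The deprecation warning above notes that such warnings can be disabled by setting `deprecation_warnings=False` in `ansible.cfg`. A minimal sketch writing that setting (the path `./ansible.cfg` is an assumption; any `ansible.cfg` on Ansible's configuration search path works):

```shell
# Silence Ansible deprecation warnings, as the warning text above
# suggests, by setting deprecation_warnings under [defaults].
# Writing to ./ansible.cfg here is an assumption for illustration.
cat > ./ansible.cfg <<'EOF'
[defaults]
deprecation_warnings = False
EOF

grep -q 'deprecation_warnings = False' ./ansible.cfg && echo written
```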
2025-09-19 16:21:33.862240 | orchestrator | changed: [testbed-manager] 2025-09-19 16:21:33.862363 | orchestrator | 2025-09-19 16:21:33.862381 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-19 16:21:42.653182 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-19 16:21:42.935527 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-19 16:21:42.935602 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-19 16:21:42.935616 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-19 16:21:42.935640 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-19 16:21:42.935651 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-19 16:21:42.935662 | orchestrator | 2025-09-19 16:21:42.935675 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-19 16:21:43.881496 | orchestrator | changed: [testbed-manager] 2025-09-19 16:21:43.881588 | orchestrator | 2025-09-19 16:21:43.881604 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-19 16:21:43.921326 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:21:43.921384 | orchestrator | 2025-09-19 16:21:43.921393 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-19 16:21:47.043538 | orchestrator | changed: [testbed-manager] 2025-09-19 16:21:47.043606 | orchestrator | 2025-09-19 16:21:47.043622 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-19 16:21:47.087705 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:21:47.087798 | orchestrator | 2025-09-19 16:21:47.087814 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-19 16:23:22.453527 | orchestrator | changed: [testbed-manager] 2025-09-19 
16:23:22.453616 | orchestrator | 2025-09-19 16:23:22.453632 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-19 16:23:23.624645 | orchestrator | ok: [testbed-manager] 2025-09-19 16:23:23.624699 | orchestrator | 2025-09-19 16:23:23.624707 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 16:23:23.624714 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-19 16:23:23.624719 | orchestrator | 2025-09-19 16:23:24.046491 | orchestrator | ok: Runtime: 0:02:19.496247 2025-09-19 16:23:24.064019 | 2025-09-19 16:23:24.064213 | TASK [Reboot manager] 2025-09-19 16:23:25.601301 | orchestrator | ok: Runtime: 0:00:00.961268 2025-09-19 16:23:25.617521 | 2025-09-19 16:23:25.617670 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-19 16:23:40.662546 | orchestrator | ok 2025-09-19 16:23:40.673017 | 2025-09-19 16:23:40.673166 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-19 16:24:40.719423 | orchestrator | ok 2025-09-19 16:24:40.730789 | 2025-09-19 16:24:40.730955 | TASK [Deploy manager + bootstrap nodes] 2025-09-19 16:24:43.264834 | orchestrator | 2025-09-19 16:24:43.265037 | orchestrator | # DEPLOY MANAGER 2025-09-19 16:24:43.265061 | orchestrator | 2025-09-19 16:24:43.265078 | orchestrator | + set -e 2025-09-19 16:24:43.265092 | orchestrator | + echo 2025-09-19 16:24:43.265106 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-19 16:24:43.265125 | orchestrator | + echo 2025-09-19 16:24:43.265179 | orchestrator | + cat /opt/manager-vars.sh 2025-09-19 16:24:43.268759 | orchestrator | export NUMBER_OF_NODES=6 2025-09-19 16:24:43.268809 | orchestrator | 2025-09-19 16:24:43.268842 | orchestrator | export CEPH_VERSION=reef 2025-09-19 16:24:43.268866 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-19 16:24:43.268880 | orchestrator 
| export MANAGER_VERSION=latest
2025-09-19 16:24:43.268904 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-09-19 16:24:43.268916 | orchestrator |
2025-09-19 16:24:43.268935 | orchestrator | export ARA=false
2025-09-19 16:24:43.268946 | orchestrator | export DEPLOY_MODE=manager
2025-09-19 16:24:43.268964 | orchestrator | export TEMPEST=false
2025-09-19 16:24:43.268976 | orchestrator | export IS_ZUUL=true
2025-09-19 16:24:43.268987 | orchestrator |
2025-09-19 16:24:43.269004 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.107
2025-09-19 16:24:43.269016 | orchestrator | export EXTERNAL_API=false
2025-09-19 16:24:43.269026 | orchestrator |
2025-09-19 16:24:43.269037 | orchestrator | export IMAGE_USER=ubuntu
2025-09-19 16:24:43.269051 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-09-19 16:24:43.269062 | orchestrator |
2025-09-19 16:24:43.269072 | orchestrator | export CEPH_STACK=ceph-ansible
2025-09-19 16:24:43.269310 | orchestrator |
2025-09-19 16:24:43.269329 | orchestrator | + echo
2025-09-19 16:24:43.269341 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 16:24:43.270461 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 16:24:43.270480 | orchestrator | ++ INTERACTIVE=false
2025-09-19 16:24:43.270529 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 16:24:43.270543 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 16:24:43.270929 | orchestrator | + source /opt/manager-vars.sh
2025-09-19 16:24:43.270944 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-19 16:24:43.270955 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-19 16:24:43.271113 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-19 16:24:43.271129 | orchestrator | ++ CEPH_VERSION=reef
2025-09-19 16:24:43.271252 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-19 16:24:43.271267 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-19 16:24:43.271310 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-19 16:24:43.271321 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-19 16:24:43.271331 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 16:24:43.271396 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-19 16:24:43.271410 | orchestrator | ++ export ARA=false
2025-09-19 16:24:43.271421 | orchestrator | ++ ARA=false
2025-09-19 16:24:43.271431 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-19 16:24:43.271442 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-19 16:24:43.271457 | orchestrator | ++ export TEMPEST=false
2025-09-19 16:24:43.271468 | orchestrator | ++ TEMPEST=false
2025-09-19 16:24:43.271478 | orchestrator | ++ export IS_ZUUL=true
2025-09-19 16:24:43.271498 | orchestrator | ++ IS_ZUUL=true
2025-09-19 16:24:43.271510 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.107
2025-09-19 16:24:43.271521 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.107
2025-09-19 16:24:43.271730 | orchestrator | ++ export EXTERNAL_API=false
2025-09-19 16:24:43.271830 | orchestrator | ++ EXTERNAL_API=false
2025-09-19 16:24:43.271865 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-19 16:24:43.271877 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-19 16:24:43.271887 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-19 16:24:43.271897 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-19 16:24:43.271907 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-19 16:24:43.271917 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-19 16:24:43.271974 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-09-19 16:24:43.337391 | orchestrator | + docker version
2025-09-19 16:24:43.603864 | orchestrator | Client: Docker Engine - Community
2025-09-19 16:24:43.603951 | orchestrator | Version: 27.5.1
2025-09-19 16:24:43.603962 | orchestrator | API version: 1.47
2025-09-19 16:24:43.603970 | orchestrator | Go version: go1.22.11
2025-09-19 16:24:43.603976 | orchestrator | Git commit: 9f9e405
2025-09-19 16:24:43.603983 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-19 16:24:43.603991 | orchestrator | OS/Arch: linux/amd64
2025-09-19 16:24:43.603997 | orchestrator | Context: default
2025-09-19 16:24:43.604003 | orchestrator |
2025-09-19 16:24:43.604010 | orchestrator | Server: Docker Engine - Community
2025-09-19 16:24:43.604017 | orchestrator | Engine:
2025-09-19 16:24:43.604023 | orchestrator | Version: 27.5.1
2025-09-19 16:24:43.604030 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-09-19 16:24:43.604063 | orchestrator | Go version: go1.22.11
2025-09-19 16:24:43.604070 | orchestrator | Git commit: 4c9b3b0
2025-09-19 16:24:43.604076 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-19 16:24:43.604082 | orchestrator | OS/Arch: linux/amd64
2025-09-19 16:24:43.604088 | orchestrator | Experimental: false
2025-09-19 16:24:43.604095 | orchestrator | containerd:
2025-09-19 16:24:43.604101 | orchestrator | Version: 1.7.27
2025-09-19 16:24:43.604107 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-09-19 16:24:43.604114 | orchestrator | runc:
2025-09-19 16:24:43.604120 | orchestrator | Version: 1.2.5
2025-09-19 16:24:43.604126 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-09-19 16:24:43.604133 | orchestrator | docker-init:
2025-09-19 16:24:43.604139 | orchestrator | Version: 0.19.0
2025-09-19 16:24:43.604146 | orchestrator | GitCommit: de40ad0
2025-09-19 16:24:43.605764 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-09-19 16:24:43.615148 | orchestrator | + set -e
2025-09-19 16:24:43.615194 | orchestrator | + source /opt/manager-vars.sh
2025-09-19 16:24:43.615201 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-19 16:24:43.615208 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-19 16:24:43.615214 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-19 16:24:43.615221 | orchestrator | ++ CEPH_VERSION=reef
2025-09-19 16:24:43.615227 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-19 16:24:43.615234 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-19 16:24:43.615241 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-19 16:24:43.615247 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-19 16:24:43.615253 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 16:24:43.615260 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-19 16:24:43.615266 | orchestrator | ++ export ARA=false
2025-09-19 16:24:43.615293 | orchestrator | ++ ARA=false
2025-09-19 16:24:43.615300 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-19 16:24:43.615306 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-19 16:24:43.615312 | orchestrator | ++ export TEMPEST=false
2025-09-19 16:24:43.615318 | orchestrator | ++ TEMPEST=false
2025-09-19 16:24:43.615324 | orchestrator | ++ export IS_ZUUL=true
2025-09-19 16:24:43.615330 | orchestrator | ++ IS_ZUUL=true
2025-09-19 16:24:43.615336 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.107
2025-09-19 16:24:43.615342 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.107
2025-09-19 16:24:43.615349 | orchestrator | ++ export EXTERNAL_API=false
2025-09-19 16:24:43.615355 | orchestrator | ++ EXTERNAL_API=false
2025-09-19 16:24:43.615360 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-19 16:24:43.615366 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-19 16:24:43.615373 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-19 16:24:43.615379 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-19 16:24:43.615385 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-19 16:24:43.615391 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-19 16:24:43.615397 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 16:24:43.615403 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 16:24:43.615409 | orchestrator | ++ INTERACTIVE=false
2025-09-19 16:24:43.615421 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 16:24:43.615431 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 16:24:43.615462 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-19 16:24:43.615474 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-19 16:24:43.615581 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-09-19 16:24:43.622825 | orchestrator | + set -e
2025-09-19 16:24:43.622853 | orchestrator | + VERSION=reef
2025-09-19 16:24:43.624031 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-19 16:24:43.630228 | orchestrator | + [[ -n ceph_version: reef ]]
2025-09-19 16:24:43.630288 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-09-19 16:24:43.633395 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-09-19 16:24:43.637451 | orchestrator | + set -e
2025-09-19 16:24:43.637489 | orchestrator | + VERSION=2024.2
2025-09-19 16:24:43.638061 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-19 16:24:43.641812 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-09-19 16:24:43.641848 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-09-19 16:24:43.646954 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-09-19 16:24:43.647862 | orchestrator | ++ semver latest 7.0.0
2025-09-19 16:24:43.704032 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-19 16:24:43.704126 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-19 16:24:43.704139 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-09-19 16:24:43.704152 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-09-19 16:24:43.800948 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-19 16:24:43.801737 | orchestrator | + source /opt/venv/bin/activate
2025-09-19 16:24:43.803077 | orchestrator | ++ deactivate nondestructive
2025-09-19 16:24:43.803097 | orchestrator | ++ '[' -n '' ']'
2025-09-19 16:24:43.803110 | orchestrator | ++ '[' -n '' ']'
2025-09-19 16:24:43.803121 | orchestrator | ++ hash -r
2025-09-19 16:24:43.803229 | orchestrator | ++ '[' -n '' ']'
2025-09-19 16:24:43.803245 | orchestrator | ++ unset VIRTUAL_ENV
2025-09-19 16:24:43.803256 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-09-19 16:24:43.803267 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-09-19 16:24:43.803633 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-09-19 16:24:43.803740 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-09-19 16:24:43.803762 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-09-19 16:24:43.803776 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-09-19 16:24:43.803799 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 16:24:43.803812 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 16:24:43.803823 | orchestrator | ++ export PATH
2025-09-19 16:24:43.803834 | orchestrator | ++ '[' -n '' ']'
2025-09-19 16:24:43.804015 | orchestrator | ++ '[' -z '' ']'
2025-09-19 16:24:43.804033 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-09-19 16:24:43.804051 | orchestrator | ++ PS1='(venv) '
2025-09-19 16:24:43.804062 | orchestrator | ++ export PS1
2025-09-19 16:24:43.804077 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-09-19 16:24:43.804095 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-09-19 16:24:43.804110 | orchestrator | ++ hash -r
2025-09-19 16:24:43.804354 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-09-19 16:24:44.999522 | orchestrator |
2025-09-19 16:24:44.999633 | orchestrator |
PLAY [Copy custom facts] *******************************************************
2025-09-19 16:24:44.999649 | orchestrator |
2025-09-19 16:24:44.999661 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-19 16:24:45.590518 | orchestrator | ok: [testbed-manager]
2025-09-19 16:24:45.590628 | orchestrator |
2025-09-19 16:24:45.590643 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-19 16:24:46.619826 | orchestrator | changed: [testbed-manager]
2025-09-19 16:24:46.619939 | orchestrator |
2025-09-19 16:24:46.619954 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-09-19 16:24:46.619964 | orchestrator |
2025-09-19 16:24:46.619972 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 16:24:49.121703 | orchestrator | ok: [testbed-manager]
2025-09-19 16:24:49.121837 | orchestrator |
2025-09-19 16:24:49.121861 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-09-19 16:24:49.183826 | orchestrator | ok: [testbed-manager]
2025-09-19 16:24:49.183941 | orchestrator |
2025-09-19 16:24:49.183959 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-09-19 16:24:49.660423 | orchestrator | changed: [testbed-manager]
2025-09-19 16:24:49.660540 | orchestrator |
2025-09-19 16:24:49.660557 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-09-19 16:24:49.702555 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:24:49.702656 | orchestrator |
2025-09-19 16:24:49.702672 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-09-19 16:24:50.049796 | orchestrator | changed: [testbed-manager]
2025-09-19 16:24:50.049887 | orchestrator |
2025-09-19 16:24:50.049898 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-09-19 16:24:50.096763 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:24:50.096868 | orchestrator |
2025-09-19 16:24:50.096883 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-09-19 16:24:50.451668 | orchestrator | ok: [testbed-manager]
2025-09-19 16:24:50.451763 | orchestrator |
2025-09-19 16:24:50.451777 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-09-19 16:24:50.575026 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:24:50.575127 | orchestrator |
2025-09-19 16:24:50.575161 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-09-19 16:24:50.575184 | orchestrator |
2025-09-19 16:24:50.575206 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 16:24:52.392152 | orchestrator | ok: [testbed-manager]
2025-09-19 16:24:52.392256 | orchestrator |
2025-09-19 16:24:52.392325 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-09-19 16:24:52.491204 | orchestrator | included: osism.services.traefik for testbed-manager
2025-09-19 16:24:52.491321 | orchestrator |
2025-09-19 16:24:52.491337 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-09-19 16:24:52.551387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-09-19 16:24:52.551462 | orchestrator |
2025-09-19 16:24:52.551478 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-09-19 16:24:53.676432 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-09-19 16:24:53.676535 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-09-19 16:24:53.676549 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-09-19 16:24:53.676560 | orchestrator |
2025-09-19 16:24:53.676573 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-09-19 16:24:55.578458 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-09-19 16:24:55.578543 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-09-19 16:24:55.578553 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-09-19 16:24:55.578561 | orchestrator |
2025-09-19 16:24:55.578569 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-09-19 16:24:56.239508 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 16:24:56.239610 | orchestrator | changed: [testbed-manager]
2025-09-19 16:24:56.239625 | orchestrator |
2025-09-19 16:24:56.239638 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-09-19 16:24:56.874145 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 16:24:56.874318 | orchestrator | changed: [testbed-manager]
2025-09-19 16:24:56.874347 | orchestrator |
2025-09-19 16:24:56.874369 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-09-19 16:24:56.932324 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:24:56.932402 | orchestrator |
2025-09-19 16:24:56.932411 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-09-19 16:24:57.312790 | orchestrator | ok: [testbed-manager]
2025-09-19 16:24:57.312889 | orchestrator |
2025-09-19 16:24:57.312905 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-09-19 16:24:57.395890 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-09-19 16:24:57.395986 | orchestrator |
2025-09-19 16:24:57.396000 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-09-19 16:24:58.473990 | orchestrator | changed: [testbed-manager]
2025-09-19 16:24:58.474137 | orchestrator |
2025-09-19 16:24:58.474152 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-09-19 16:24:59.271229 | orchestrator | changed: [testbed-manager]
2025-09-19 16:24:59.271374 | orchestrator |
2025-09-19 16:24:59.271390 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-09-19 16:25:10.086070 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:10.086145 | orchestrator |
2025-09-19 16:25:10.086156 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-09-19 16:25:10.149125 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:25:10.149193 | orchestrator |
2025-09-19 16:25:10.149205 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-09-19 16:25:10.149216 | orchestrator |
2025-09-19 16:25:10.149225 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 16:25:11.990238 | orchestrator | ok: [testbed-manager]
2025-09-19 16:25:11.990319 | orchestrator |
2025-09-19 16:25:11.990343 | orchestrator | TASK [Apply manager role] ******************************************************
2025-09-19 16:25:12.105046 | orchestrator | included: osism.services.manager for testbed-manager
2025-09-19 16:25:12.105120 | orchestrator |
2025-09-19 16:25:12.105133 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-09-19 16:25:12.163544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 16:25:12.163614 | orchestrator |
2025-09-19 16:25:12.163627 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-09-19 16:25:14.799797 | orchestrator | ok: [testbed-manager]
2025-09-19 16:25:14.799886 | orchestrator |
2025-09-19 16:25:14.799898 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-09-19 16:25:14.850157 | orchestrator | ok: [testbed-manager]
2025-09-19 16:25:14.850231 | orchestrator |
2025-09-19 16:25:14.850243 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-09-19 16:25:14.982717 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-09-19 16:25:14.982805 | orchestrator |
2025-09-19 16:25:14.982817 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-09-19 16:25:17.986964 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-09-19 16:25:17.987073 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-09-19 16:25:17.987086 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-09-19 16:25:17.987097 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-09-19 16:25:17.987107 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-09-19 16:25:17.987117 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-09-19 16:25:17.987126 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-09-19 16:25:17.987136 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-09-19 16:25:17.987146 | orchestrator |
2025-09-19 16:25:17.987157 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-09-19 16:25:18.648597 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:18.648702 | orchestrator |
2025-09-19 16:25:18.648717 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-09-19 16:25:19.287501 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:19.287603 | orchestrator |
2025-09-19 16:25:19.287619 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-09-19 16:25:19.373467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-09-19 16:25:19.373566 | orchestrator |
2025-09-19 16:25:19.373581 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-09-19 16:25:20.609416 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-09-19 16:25:20.609523 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-09-19 16:25:20.609539 | orchestrator |
2025-09-19 16:25:20.609554 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-09-19 16:25:21.252066 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:21.252166 | orchestrator |
2025-09-19 16:25:21.252184 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-09-19 16:25:21.303746 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:25:21.303831 | orchestrator |
2025-09-19 16:25:21.303846 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-09-19 16:25:21.382795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-09-19 16:25:21.382892 | orchestrator |
2025-09-19 16:25:21.382907 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-09-19 16:25:22.025357 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:22.025457 | orchestrator |
2025-09-19 16:25:22.025471 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-09-19 16:25:22.093530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-09-19 16:25:22.093683 | orchestrator |
2025-09-19 16:25:22.093705 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-09-19 16:25:23.489088 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 16:25:23.489181 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 16:25:23.489193 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:23.489204 | orchestrator |
2025-09-19 16:25:23.489215 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-09-19 16:25:24.156529 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:24.156622 | orchestrator |
2025-09-19 16:25:24.156636 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-09-19 16:25:24.203060 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:25:24.203154 | orchestrator |
2025-09-19 16:25:24.203169 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-09-19 16:25:24.291701 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-09-19 16:25:24.291800 | orchestrator |
2025-09-19 16:25:24.291824 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-09-19 16:25:24.837087 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:24.837185 | orchestrator |
2025-09-19 16:25:24.837200 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-09-19 16:25:25.260439 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:25.260535 | orchestrator |
2025-09-19 16:25:25.260550 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-09-19 16:25:26.575660 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-09-19 16:25:26.575753 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-09-19 16:25:26.575764 | orchestrator |
2025-09-19 16:25:26.575775 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-09-19 16:25:27.247246 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:27.247434 | orchestrator |
2025-09-19 16:25:27.247451 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-09-19 16:25:27.641506 | orchestrator | ok: [testbed-manager]
2025-09-19 16:25:27.641606 | orchestrator |
2025-09-19 16:25:27.641621 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-09-19 16:25:28.012467 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:28.012566 | orchestrator |
2025-09-19 16:25:28.012581 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-09-19 16:25:28.049042 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:25:28.049117 | orchestrator |
2025-09-19 16:25:28.049130 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-09-19 16:25:28.118385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-09-19 16:25:28.118456 | orchestrator |
2025-09-19 16:25:28.118469 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-09-19 16:25:28.172653 | orchestrator | ok: [testbed-manager]
2025-09-19 16:25:28.172727 | orchestrator |
2025-09-19 16:25:28.172741 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-09-19 16:25:30.252816 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-09-19 16:25:30.252946 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-09-19 16:25:30.252972 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-09-19 16:25:30.252992 | orchestrator |
2025-09-19 16:25:30.253013 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-09-19 16:25:31.008795 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:31.008895 | orchestrator |
2025-09-19 16:25:31.008911 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-09-19 16:25:31.751524 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:31.751639 | orchestrator |
2025-09-19 16:25:31.751663 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-09-19 16:25:32.528208 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:32.528433 | orchestrator |
2025-09-19 16:25:32.528467 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-09-19 16:25:32.597017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-09-19 16:25:32.597136 | orchestrator |
2025-09-19 16:25:32.597162 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-09-19 16:25:32.649992 | orchestrator | ok: [testbed-manager]
2025-09-19 16:25:32.650133 | orchestrator |
2025-09-19 16:25:32.650149 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-09-19 16:25:33.382681 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-09-19 16:25:33.382777 | orchestrator |
2025-09-19 16:25:33.382791 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-09-19 16:25:33.476987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-09-19 16:25:33.477079 | orchestrator |
2025-09-19 16:25:33.477092 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-09-19 16:25:34.209347 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:34.209467 | orchestrator |
2025-09-19 16:25:34.209485 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-09-19 16:25:34.791578 | orchestrator | ok: [testbed-manager]
2025-09-19 16:25:34.791685 | orchestrator |
2025-09-19 16:25:34.791701 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-09-19 16:25:34.849410 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:25:34.849497 | orchestrator |
2025-09-19 16:25:34.849510 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-09-19 16:25:34.907454 | orchestrator | ok: [testbed-manager]
2025-09-19 16:25:34.907550 | orchestrator |
2025-09-19 16:25:34.907565 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-09-19 16:25:35.755970 | orchestrator | changed: [testbed-manager]
2025-09-19 16:25:35.756068 | orchestrator |
2025-09-19 16:25:35.756084 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-09-19 16:26:42.650831 | orchestrator | changed: [testbed-manager]
2025-09-19 16:26:42.650949 | orchestrator |
2025-09-19 16:26:42.650966 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-09-19 16:26:43.615043 | orchestrator | ok: [testbed-manager]
2025-09-19 16:26:43.615146 | orchestrator |
2025-09-19 16:26:43.615163 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-09-19 16:26:43.672769 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:26:43.672857 | orchestrator |
2025-09-19 16:26:43.672874 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-09-19 16:26:46.344002 | orchestrator | changed: [testbed-manager]
2025-09-19 16:26:46.344094 | orchestrator |
2025-09-19 16:26:46.344109 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-09-19 16:26:46.398723 | orchestrator | ok: [testbed-manager]
2025-09-19 16:26:46.398818 | orchestrator |
2025-09-19 16:26:46.398834 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-19 16:26:46.398848 | orchestrator |
2025-09-19 16:26:46.398859 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-09-19 16:26:46.444483 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:26:46.444548 | orchestrator |
2025-09-19 16:26:46.444561 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-09-19 16:27:46.497478 | orchestrator | Pausing for 60 seconds
2025-09-19 16:27:46.497591 | orchestrator | changed: [testbed-manager]
2025-09-19 16:27:46.497607 | orchestrator |
2025-09-19 16:27:46.497620 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-09-19 16:27:50.647411 | orchestrator | changed: [testbed-manager]
2025-09-19 16:27:50.647519 | orchestrator |
2025-09-19 16:27:50.647535 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-09-19 16:28:32.418929 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-09-19 16:28:32.419023 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-09-19 16:28:32.419033 | orchestrator | changed: [testbed-manager]
2025-09-19 16:28:32.419061 | orchestrator |
2025-09-19 16:28:32.419070 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-09-19 16:28:42.378374 | orchestrator | changed: [testbed-manager]
2025-09-19 16:28:42.378486 | orchestrator |
2025-09-19 16:28:42.378503 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-09-19 16:28:42.499734 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-09-19 16:28:42.499836 | orchestrator |
2025-09-19 16:28:42.499851 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-19 16:28:42.499863 | orchestrator |
2025-09-19 16:28:42.499874 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-09-19 16:28:42.547570 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:28:42.547648 | orchestrator |
2025-09-19 16:28:42.547662 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:28:42.547675 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-09-19 16:28:42.547686 | orchestrator |
2025-09-19 16:28:42.657490 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-19 16:28:42.657582 | orchestrator | + deactivate
2025-09-19 16:28:42.657595 | orchestrator | + '[' -n
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-19 16:28:42.657607 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-19 16:28:42.657617 | orchestrator | + export PATH
2025-09-19 16:28:42.657627 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-19 16:28:42.657636 | orchestrator | + '[' -n '' ']'
2025-09-19 16:28:42.657646 | orchestrator | + hash -r
2025-09-19 16:28:42.657677 | orchestrator | + '[' -n '' ']'
2025-09-19 16:28:42.657688 | orchestrator | + unset VIRTUAL_ENV
2025-09-19 16:28:42.657698 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-19 16:28:42.657708 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-19 16:28:42.657717 | orchestrator | + unset -f deactivate
2025-09-19 16:28:42.657728 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-19 16:28:42.665883 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-19 16:28:42.665906 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-19 16:28:42.665916 | orchestrator | + local max_attempts=60
2025-09-19 16:28:42.665926 | orchestrator | + local name=ceph-ansible
2025-09-19 16:28:42.665935 | orchestrator | + local attempt_num=1
2025-09-19 16:28:42.667068 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 16:28:42.700501 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 16:28:42.700573 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-19 16:28:42.700587 | orchestrator | + local max_attempts=60
2025-09-19 16:28:42.700599 | orchestrator | + local name=kolla-ansible
2025-09-19 16:28:42.700608 | orchestrator | + local attempt_num=1
2025-09-19 16:28:42.701490 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-19 16:28:42.735594 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 16:28:42.735679 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-19 16:28:42.735694 | orchestrator | + local max_attempts=60
2025-09-19 16:28:42.735705 | orchestrator | + local name=osism-ansible
2025-09-19 16:28:42.735716 | orchestrator | + local attempt_num=1
2025-09-19 16:28:42.736489 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-19 16:28:42.766211 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 16:28:42.766261 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-19 16:28:42.766273 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-19 16:28:43.434726 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-19 16:28:43.634258 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-09-19 16:28:43.634392 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-09-19 16:28:43.634410 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-09-19 16:28:43.634437 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-09-19 16:28:43.634446 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-09-19 16:28:43.634461 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-09-19 16:28:43.634469 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-09-19 16:28:43.634476 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy)
2025-09-19 16:28:43.634483 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-09-19 16:28:43.634490 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-09-19 16:28:43.634497 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-09-19 16:28:43.634504 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-09-19 16:28:43.634511 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-09-19 16:28:43.634517 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2025-09-19 16:28:43.634524 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-09-19 16:28:43.634530 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-09-19 16:28:43.642571 | orchestrator | ++ semver latest 7.0.0
2025-09-19 16:28:43.703105 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-19 16:28:43.703186 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-19 16:28:43.703201 | orchestrator | + sed -i
s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-19 16:28:43.707177 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-19 16:28:55.849168 | orchestrator | 2025-09-19 16:28:55 | INFO  | Task adc443a8-3dee-4359-82dc-de06d8f3c35c (resolvconf) was prepared for execution. 2025-09-19 16:28:55.849284 | orchestrator | 2025-09-19 16:28:55 | INFO  | It takes a moment until task adc443a8-3dee-4359-82dc-de06d8f3c35c (resolvconf) has been started and output is visible here. 2025-09-19 16:29:08.935785 | orchestrator | 2025-09-19 16:29:08.935900 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-19 16:29:08.935916 | orchestrator | 2025-09-19 16:29:08.935928 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 16:29:08.935973 | orchestrator | Friday 19 September 2025 16:28:59 +0000 (0:00:00.134) 0:00:00.134 ****** 2025-09-19 16:29:08.935993 | orchestrator | ok: [testbed-manager] 2025-09-19 16:29:08.936012 | orchestrator | 2025-09-19 16:29:08.936031 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-19 16:29:08.936051 | orchestrator | Friday 19 September 2025 16:29:03 +0000 (0:00:03.581) 0:00:03.716 ****** 2025-09-19 16:29:08.936069 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:29:08.936089 | orchestrator | 2025-09-19 16:29:08.936101 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-19 16:29:08.936112 | orchestrator | Friday 19 September 2025 16:29:03 +0000 (0:00:00.068) 0:00:03.784 ****** 2025-09-19 16:29:08.936123 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-19 16:29:08.936136 | orchestrator | 2025-09-19 16:29:08.936147 | orchestrator | TASK 
[osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-19 16:29:08.936158 | orchestrator | Friday 19 September 2025 16:29:03 +0000 (0:00:00.084) 0:00:03.869 ****** 2025-09-19 16:29:08.936169 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-19 16:29:08.936179 | orchestrator | 2025-09-19 16:29:08.936190 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-19 16:29:08.936201 | orchestrator | Friday 19 September 2025 16:29:03 +0000 (0:00:00.070) 0:00:03.939 ****** 2025-09-19 16:29:08.936211 | orchestrator | ok: [testbed-manager] 2025-09-19 16:29:08.936222 | orchestrator | 2025-09-19 16:29:08.936233 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-19 16:29:08.936244 | orchestrator | Friday 19 September 2025 16:29:04 +0000 (0:00:01.066) 0:00:05.005 ****** 2025-09-19 16:29:08.936254 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:29:08.936265 | orchestrator | 2025-09-19 16:29:08.936275 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-19 16:29:08.936286 | orchestrator | Friday 19 September 2025 16:29:04 +0000 (0:00:00.077) 0:00:05.083 ****** 2025-09-19 16:29:08.936297 | orchestrator | ok: [testbed-manager] 2025-09-19 16:29:08.936309 | orchestrator | 2025-09-19 16:29:08.936322 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-19 16:29:08.936334 | orchestrator | Friday 19 September 2025 16:29:04 +0000 (0:00:00.467) 0:00:05.550 ****** 2025-09-19 16:29:08.936346 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:29:08.936358 | orchestrator | 2025-09-19 16:29:08.936402 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 
2025-09-19 16:29:08.936416 | orchestrator | Friday 19 September 2025 16:29:04 +0000 (0:00:00.080) 0:00:05.631 ****** 2025-09-19 16:29:08.936428 | orchestrator | changed: [testbed-manager] 2025-09-19 16:29:08.936440 | orchestrator | 2025-09-19 16:29:08.936452 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-19 16:29:08.936465 | orchestrator | Friday 19 September 2025 16:29:05 +0000 (0:00:00.524) 0:00:06.156 ****** 2025-09-19 16:29:08.936477 | orchestrator | changed: [testbed-manager] 2025-09-19 16:29:08.936489 | orchestrator | 2025-09-19 16:29:08.936501 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-19 16:29:08.936513 | orchestrator | Friday 19 September 2025 16:29:06 +0000 (0:00:01.073) 0:00:07.230 ****** 2025-09-19 16:29:08.936525 | orchestrator | ok: [testbed-manager] 2025-09-19 16:29:08.936537 | orchestrator | 2025-09-19 16:29:08.936549 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-19 16:29:08.936561 | orchestrator | Friday 19 September 2025 16:29:07 +0000 (0:00:00.975) 0:00:08.206 ****** 2025-09-19 16:29:08.936587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-19 16:29:08.936609 | orchestrator | 2025-09-19 16:29:08.936622 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-19 16:29:08.936634 | orchestrator | Friday 19 September 2025 16:29:07 +0000 (0:00:00.080) 0:00:08.286 ****** 2025-09-19 16:29:08.936646 | orchestrator | changed: [testbed-manager] 2025-09-19 16:29:08.936656 | orchestrator | 2025-09-19 16:29:08.936667 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 16:29:08.936679 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 
failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 16:29:08.936690 | orchestrator | 2025-09-19 16:29:08.936701 | orchestrator | 2025-09-19 16:29:08.936712 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 16:29:08.936723 | orchestrator | Friday 19 September 2025 16:29:08 +0000 (0:00:01.129) 0:00:09.415 ****** 2025-09-19 16:29:08.936734 | orchestrator | =============================================================================== 2025-09-19 16:29:08.936744 | orchestrator | Gathering Facts --------------------------------------------------------- 3.58s 2025-09-19 16:29:08.936755 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.13s 2025-09-19 16:29:08.936766 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.07s 2025-09-19 16:29:08.936777 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.07s 2025-09-19 16:29:08.936787 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s 2025-09-19 16:29:08.936798 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2025-09-19 16:29:08.936827 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s 2025-09-19 16:29:08.936838 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-09-19 16:29:08.936849 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-09-19 16:29:08.936860 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-09-19 16:29:08.936871 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.08s 2025-09-19 16:29:08.936881 | orchestrator | osism.commons.resolvconf : Include distribution specific 
installation tasks --- 0.07s 2025-09-19 16:29:08.936892 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-09-19 16:29:09.211515 | orchestrator | + osism apply sshconfig 2025-09-19 16:29:21.241336 | orchestrator | 2025-09-19 16:29:21 | INFO  | Task ff6f1d4f-a23d-4bc3-b5df-6054cb90a109 (sshconfig) was prepared for execution. 2025-09-19 16:29:21.241473 | orchestrator | 2025-09-19 16:29:21 | INFO  | It takes a moment until task ff6f1d4f-a23d-4bc3-b5df-6054cb90a109 (sshconfig) has been started and output is visible here. 2025-09-19 16:29:32.753186 | orchestrator | 2025-09-19 16:29:32.753297 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-19 16:29:32.753313 | orchestrator | 2025-09-19 16:29:32.753325 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-19 16:29:32.753336 | orchestrator | Friday 19 September 2025 16:29:25 +0000 (0:00:00.160) 0:00:00.160 ****** 2025-09-19 16:29:32.753347 | orchestrator | ok: [testbed-manager] 2025-09-19 16:29:32.753360 | orchestrator | 2025-09-19 16:29:32.753370 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-19 16:29:32.753427 | orchestrator | Friday 19 September 2025 16:29:25 +0000 (0:00:00.585) 0:00:00.746 ****** 2025-09-19 16:29:32.753439 | orchestrator | changed: [testbed-manager] 2025-09-19 16:29:32.753451 | orchestrator | 2025-09-19 16:29:32.753462 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-19 16:29:32.753474 | orchestrator | Friday 19 September 2025 16:29:26 +0000 (0:00:00.503) 0:00:01.249 ****** 2025-09-19 16:29:32.753485 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-19 16:29:32.753496 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-19 16:29:32.753535 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-1) 2025-09-19 16:29:32.753546 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-19 16:29:32.753556 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-19 16:29:32.753586 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-19 16:29:32.753598 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-19 16:29:32.753608 | orchestrator | 2025-09-19 16:29:32.753619 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-19 16:29:32.753630 | orchestrator | Friday 19 September 2025 16:29:31 +0000 (0:00:05.685) 0:00:06.935 ****** 2025-09-19 16:29:32.753641 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:29:32.753651 | orchestrator | 2025-09-19 16:29:32.753662 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-19 16:29:32.753673 | orchestrator | Friday 19 September 2025 16:29:31 +0000 (0:00:00.066) 0:00:07.001 ****** 2025-09-19 16:29:32.753683 | orchestrator | changed: [testbed-manager] 2025-09-19 16:29:32.753694 | orchestrator | 2025-09-19 16:29:32.753704 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 16:29:32.753717 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 16:29:32.753728 | orchestrator | 2025-09-19 16:29:32.753740 | orchestrator | 2025-09-19 16:29:32.753752 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 16:29:32.753764 | orchestrator | Friday 19 September 2025 16:29:32 +0000 (0:00:00.599) 0:00:07.600 ****** 2025-09-19 16:29:32.753777 | orchestrator | =============================================================================== 2025-09-19 16:29:32.753789 | orchestrator | osism.commons.sshconfig : Ensure config for each host 
exist ------------- 5.69s 2025-09-19 16:29:32.753800 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.60s 2025-09-19 16:29:32.753812 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.59s 2025-09-19 16:29:32.753823 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.50s 2025-09-19 16:29:32.753835 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-09-19 16:29:33.041212 | orchestrator | + osism apply known-hosts 2025-09-19 16:29:44.977101 | orchestrator | 2025-09-19 16:29:44 | INFO  | Task 64f1e260-4046-46a3-bdfe-b88f5f3d997b (known-hosts) was prepared for execution. 2025-09-19 16:29:44.977211 | orchestrator | 2025-09-19 16:29:44 | INFO  | It takes a moment until task 64f1e260-4046-46a3-bdfe-b88f5f3d997b (known-hosts) has been started and output is visible here. 2025-09-19 16:30:01.497676 | orchestrator | 2025-09-19 16:30:01.497790 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-19 16:30:01.497806 | orchestrator | 2025-09-19 16:30:01.497817 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-19 16:30:01.497829 | orchestrator | Friday 19 September 2025 16:29:48 +0000 (0:00:00.170) 0:00:00.170 ****** 2025-09-19 16:30:01.497842 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-19 16:30:01.497853 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-19 16:30:01.497864 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-19 16:30:01.497875 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-19 16:30:01.497885 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-19 16:30:01.497896 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-19 16:30:01.497907 | orchestrator | 
ok: [testbed-manager] => (item=testbed-node-5) 2025-09-19 16:30:01.497917 | orchestrator | 2025-09-19 16:30:01.497929 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-19 16:30:01.497941 | orchestrator | Friday 19 September 2025 16:29:54 +0000 (0:00:06.005) 0:00:06.175 ****** 2025-09-19 16:30:01.497977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-19 16:30:01.497990 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-19 16:30:01.498001 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-19 16:30:01.498011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-19 16:30:01.498085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-19 16:30:01.498108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-19 16:30:01.498119 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-19 16:30:01.498130 | orchestrator | 2025-09-19 16:30:01.498141 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 16:30:01.498152 | orchestrator | Friday 19 September 2025 16:29:55 +0000 (0:00:00.173) 0:00:06.349 ****** 2025-09-19 16:30:01.498167 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4f7chFuV2oRt4nrVhP5Z4MA0Y/iqqLyDtXZIQurd5RU314rG6AF/ou1nqw4dBXPq7LQ7NrZrnl9U2hc9wKfw7fI09Z2Dy/Y17YyuSFdHZIeVWIJvahi6QoM0/dwj/9QshS58U+uAflnjdZFsoSWqI2SNeQFO3YhW7Z7RCymkS7x2BLi3qoxRjMe1Kt1j3q0ybK4qU/nQ/po7Nh/x8PdJrC+D70i1r/mBYNB++qRkMM2d4txz1myUnQytxDjOV9TIEGv3k4rvf9/TknBgBLzm/KF7dOtCJHVrUyeNAVR/4DSMmjhh+d72IuV8LramXbecDbV7hBpvW+1M5kQz/PoGxYZjLWsNGFqfWbF/aC2Pa/zBWi5qocqDVSxdrwga5g+E3bZ3jds63FHOGwBJWAkMBr3bEr3WAXGZ68QqEeGyXozZxE/Kh4MhjTdliEvpgpnj12Q4HboV0tT0q3RZr4FgJq3LtoJgNbj2CVLn4apUS93qfIe27tihRsr+hEt8AmT0=) 2025-09-19 16:30:01.498182 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD9xUCjMwBtR7ei9lKnzvKiV4RAfU6HqZzlYjbAkli0gzIhaQy4uv19K7C9hzvUQStiwikPsJxpffpnJ9CaC2Ug=) 2025-09-19 16:30:01.498196 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAOmsjGDiENO9mNa5ak3EqsHKaYI5fv6cVXXzbmL56Qh) 2025-09-19 16:30:01.498208 | orchestrator | 2025-09-19 16:30:01.498222 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 16:30:01.498236 | orchestrator | Friday 19 September 2025 16:29:56 +0000 (0:00:01.168) 0:00:07.517 ****** 2025-09-19 16:30:01.498265 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDfNoX1p36j7jXqHDyj+QvuTR62Bh9ZwdD9dE097rU5V6YkKX00cM12FdiA/0rk1Rox/KTl2V+iXmFoLwYFEYlnMLzU2X0gwT00VAStNsoKp7687SBufPEJPavD8i7YMtw/hWfongcdrvy4clsm/KLfzKM7OZSaD4L+h+AiuCMWK9lWU9B73JgTE2zRfjLcYaGfXpXXI86LVWfXqS/3cebXfYwN4nLDWhn1qKZecHZlTNzRIP9ip0/AV6JXxkQBsMQky497/1y117jrLZJmD2ns6P+cImQwcaHfT0Eiugb8s0BgxfLrcAlbpHUWWrR+93VFBKfoD/mpiwwJ2NDr+na669cSe1uUtsg1KkEigvffXrzlDpf6I1Gr8o9TgFzN8fqMjzrAIK6jrhUeG2wgpmtccRis4niHjjkg+FIL0tMpvzIMpmuzfvWH2rdWnyXregw6V+IFO7/Qn5CuoTqLWqlBJ7BzdoNgq3sQXQraP4VxC2YKMo19JwblZpcXxBP28nc=) 2025-09-19 16:30:01.498279 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBERtiOfAd/ZDGEa6kN2g1mAqcgVLtVda5ujolLuHg8qjECtVNuMOSkbxfC06zc4zL70oiPBDsntEPscrAlRb0OE=) 2025-09-19 16:30:01.498292 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINI6VnWACyPrqOWm2RRouTHEWG+0mJsvDnsMqkRj31Lk) 2025-09-19 16:30:01.498312 | orchestrator | 2025-09-19 16:30:01.498325 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 16:30:01.498338 | orchestrator | Friday 19 September 2025 16:29:57 +0000 (0:00:01.082) 0:00:08.600 ****** 2025-09-19 16:30:01.498350 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuAfipnhl9p9FrOg2GSkOwYK9k7Vm3dYWSuKYnqDcA0O2xbHws3fLuJBAePU23XTEFQQI7QbuIOypW7wm/J3GlM1NqDz1J7WsErfe8a9f+ZFwjB/9aF69RNCKJt/61bDayWBl3XmuF2YOudgPJpCoEecaz67cBdATrHSz0k62nDzd7h9w+O0hRGFGVFuBlmxaNkDkh8ntJS5SCwD8+67UrsFZ6xnfOTFjA+i4iOKhF+jcnkhp8BcMr+N8oWIu41puOtA/YKoU4dB3S/EowedZhny+eDq+d+SfAlYWIQ9u7Z0z11xxW3p7wrtO6WaxaqcIC4UgZg+gtOAPIIrzT/swDbhuRiSJcNiYhdnmLRb1uqzatE6e2b9mFefKSle+bsFXXYE+VMxlISywt9PZMMtZ75iD/mMN+y7NHHYQAsoMocwXNAZwL1QyboSbOgAnTj24W3YgE6ZFKz9JRqky7aFGlllc+bo2KUmlbDW6hTw8qUHXpaaIO2xlHX13mIzgcitU=) 2025-09-19 16:30:01.498363 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGRLTgK92P5fEqwZxgJBOCu2LOIv23jhilfymkWLdB32TM5R2pLieEfdp71kty9/dlBp3mW8RmDK2VUFdnzN3As=) 2025-09-19 16:30:01.498375 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFZgmc4TdXPW+4cIIuT1uOITrVKyol2hG1mAQOWnoDtE) 2025-09-19 16:30:01.498388 | orchestrator | 2025-09-19 16:30:01.498460 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 16:30:01.498476 | orchestrator | Friday 19 September 2025 16:29:58 +0000 (0:00:01.081) 0:00:09.682 ****** 2025-09-19 16:30:01.498557 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL8m864QCbJFCf8JLGv6dtERlKNoctm+Mv/VuJVSQVhxtmGu74C3v0SWNl7QRfg8LYBaitS+ECvsSx++HaFOKkk=) 2025-09-19 16:30:01.498572 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZG5A31Rhid/oUI2rWZVDMZy6L3wk1Pek+SY2WJhJJfZ8ob+ny9d8kjkKbwsRB/w+Duy9Ed1/BpD8b8vL9gSGU4hcyrKw0X/BbaH/lejJ6xcik3U+2ZyC3DqXZGReWNfDoH4Rq+w7obR7WrrV6Wyz0TkO9PdpREs4pjE7aU9ju9FFIvcKoEgUaXv1Fuwbs4Mkxv8sr93K7robnHsPixQzl/vxgLxiAVNostlUXbQW44FbIHuF9d8qVfPVbfRKFl3Cm3ibTGORLeo6jpfRq39/8g8aAOytmv69hV9+6SQJZqjC5+CiFnm1fzPlFgIOLFaMTLAvCCT/euT9JTA9lTzvIOmdgHzQgS/0bn48bWfx4BB39UUODbdy+41ef66RcqakWaYget1D42xRj1np3mzN0vWQJcwoZVn9KTT90pN5kj/PS38ZQ+Csy92rSbEMLZnqyIvVyRw2Ymswpms3BVvUrYMtcO1AB3Tw5LlAOU5WWnwBJnOROkNHoPKZJpH3XSf8=) 2025-09-19 16:30:01.498584 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKJVhABGbn8IktYEMuvHQd7BYsyMv1d8ytxrNslWTpRd) 2025-09-19 16:30:01.498595 | orchestrator | 2025-09-19 16:30:01.498606 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 16:30:01.498617 | orchestrator | Friday 19 September 2025 16:29:59 +0000 (0:00:00.994) 
0:00:10.676 ****** 2025-09-19 16:30:01.498628 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0RY7C41++QqFoWO7+/aVEsJbq0D+NHhkz7XanfagDL2Clkt46t68iAfCRF8NWrbFPnMWWJ3dLJbxlWCpBN9PuM0Q3uN8OzQl3Djpq3AZ5Hk8k7qcAYlqSS6KwYbSqOzGq5AkkSqca1l3lI6Cf9nXU4cXIgCQLLBVlPqR90+8tfvtZkJcOZXdeA1ZRsmM6tvXAlBR6stPrKsRz4474BE/ZubDAikWVuFHN/j1Tbzt8FF0xgRZiDT8Yf1I+F1vqTTEJMPsZ29kza8ubZWQ8QzCUgp6zratAzUjP5VwBdJusExvkb5RuyPCt0HnM6n16n1nPGFp9nFF3R6Dhb+vm3rq2YI5LYHr1OrxBqtZNVARKT3IatMADiHmVLd4TIB1grdlA7eB8SCRcimTgty+//E02LBcHqXBm8PsV1Rz82ouECbLs6YlqFs2VXyp6hHBZ5MLIdl87NtJwQ8uCesUVmY+2eDOAV/A3KsIbj/L32A0VFxPQ7L0mkP8DzYhKbt5y6oE=) 2025-09-19 16:30:01.498639 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG714OLnz1ALErpkndALuIwMy65hmUGnc0M5uFudEozWxXaXkT8vZaUGXMfe+ttZ7rXv8xkfa7ddPVQYC0pyAio=) 2025-09-19 16:30:01.498650 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIiav3taE/6d9FspPkwgqEGrPthlLcCJ1DAUovZA6Uo/) 2025-09-19 16:30:01.498669 | orchestrator | 2025-09-19 16:30:01.498680 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 16:30:01.498691 | orchestrator | Friday 19 September 2025 16:30:00 +0000 (0:00:01.048) 0:00:11.725 ****** 2025-09-19 16:30:01.498712 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCs7i6UDCBpqbBL1XTwYKJl9LGwApn/vQ+syrIHw84nvw4Bnck49+FWDORTXFE8r8QcX+3z8jdEH4D52zr5FVKqcRy6WP/VugyPp+V5ZjCVeDbKo7RCJTNaTKm4dmtDTaDhcfpI90IRPBZaWSUV2IrOjPvllbw6K7/z0ayM6uP12c/GtR8rRAw5R8alKjJnPIukJqOSqvmEZm+qKtPHOExv7tU2nGD7cJs9wKjxawCU+bW7sZYH3ILpcw3V+2i8v2PYXyCbwtjt9JV2QQXzYGmBHePU7Wkf9vGrozAwu1g2MoNGaF679Qwd8OjETWZ2Hf7l5srSU6B2EqYh6Ruv2BXEWZgTzvgsGFHvWeIpF0BnUkPSH6BO9OW08bolZXOSH+HwBnWU+6jIfcc9S/U8clJ33p1bRN0NBPOS1bTFg3rtthXtRjlM251Ksa4nAAxwywefnJO+LloeZYfJgiYi06UwBP0AhNvy4V+E7uL2kBFnf6ZIcj6N+ihyGcUTX2mArAs=) 2025-09-19 16:30:12.329931 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGNtgJ2ArL9PeUylt1hwRo0rAQag6UVXLXH4pa79XWmeGbrv9VWIPpjh748cVXY4y7Nfvn0w+KgK/btVdHaafyU=) 2025-09-19 16:30:12.330096 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHO5d8ng9iehtNELdZ9+f5xkX6joKKMaIrisdRq33pa9) 2025-09-19 16:30:12.330112 | orchestrator | 2025-09-19 16:30:12.330123 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-19 16:30:12.330861 | orchestrator | Friday 19 September 2025 16:30:01 +0000 (0:00:01.058) 0:00:12.783 ****** 2025-09-19 16:30:12.330882 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD7YifN3AytFUvsjrDzED6f96xr/eDkCf8W7acqBq6vJ5ryrCYe483GJq8sn0Cb51EsebSmWw7GQjK0k9myTGIsulnFSRH9pdOVKhE9Dv2YR4Rj+hLw4UgKXHxcBpUmZlByOecNWUGY8/c1AF36ez8NJfZ8gjYaJILUriW2irQmDWyTpdNGoxJEEixosCdHTQPTJvvtJiqRn7qv5Bi4fLTjIXm8r+NR3aQkaV0/kjV4SroJgUUYYcmclMdyV7AN8LGXUcYWLON3CZa4LaRIGiYKQv3kspVeRneHtPO5va9SyPR+lg27LXFXMtcTDWeZU3Aglf+Sszw0kYGpLiGl0o5/Gzmu144CviTt5YJX3qT1jTBUcCmvQ50f8lDJa3GLs3k+cYxhPO9DLGrtONJhLI1lFW6rzhxI3JBCxwqBIOndu6CveI5qdi7BgAv/0m3nFmKI6Y/3sdcNxRlZKPFc4xyKBGZxF8pbXnN0cKM6/lA3ilpLkRcmJI9Zq3g6s2OHPYU=) 2025-09-19 16:30:12.330895 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEWH5+hXnySQOzHx+wFhi/basl/F3ZAu+p8SOT2sVDEiMrXy2Sk/mjbiSJMvOn5oUxNwdMyDarCTgO3McobHnCA=)
2025-09-19 16:30:12.330904 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFdvrreyXe+fpRh1ngmv17ltbGRt+vCQMMzMuyDj8G8F)
2025-09-19 16:30:12.330913 | orchestrator |
2025-09-19 16:30:12.330922 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-09-19 16:30:12.330931 | orchestrator | Friday 19 September 2025 16:30:02 +0000 (0:00:01.105) 0:00:13.889 ******
2025-09-19 16:30:12.330941 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-09-19 16:30:12.330966 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-09-19 16:30:12.330974 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-09-19 16:30:12.330983 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-09-19 16:30:12.330992 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-09-19 16:30:12.331000 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-09-19 16:30:12.331008 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-09-19 16:30:12.331017 | orchestrator |
2025-09-19 16:30:12.331026 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-09-19 16:30:12.331035 | orchestrator | Friday 19 September 2025 16:30:07 +0000 (0:00:05.204) 0:00:19.093 ******
2025-09-19 16:30:12.331045 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-09-19 16:30:12.331056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-09-19 16:30:12.331086 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-09-19 16:30:12.331094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-09-19 16:30:12.331103 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-09-19 16:30:12.331111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-09-19 16:30:12.331120 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-09-19 16:30:12.331128 | orchestrator |
2025-09-19 16:30:12.331137 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 16:30:12.331146 | orchestrator | Friday 19 September 2025 16:30:07 +0000 (0:00:00.172) 0:00:19.266 ******
2025-09-19 16:30:12.331154 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAOmsjGDiENO9mNa5ak3EqsHKaYI5fv6cVXXzbmL56Qh)
2025-09-19 16:30:12.331185 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4f7chFuV2oRt4nrVhP5Z4MA0Y/iqqLyDtXZIQurd5RU314rG6AF/ou1nqw4dBXPq7LQ7NrZrnl9U2hc9wKfw7fI09Z2Dy/Y17YyuSFdHZIeVWIJvahi6QoM0/dwj/9QshS58U+uAflnjdZFsoSWqI2SNeQFO3YhW7Z7RCymkS7x2BLi3qoxRjMe1Kt1j3q0ybK4qU/nQ/po7Nh/x8PdJrC+D70i1r/mBYNB++qRkMM2d4txz1myUnQytxDjOV9TIEGv3k4rvf9/TknBgBLzm/KF7dOtCJHVrUyeNAVR/4DSMmjhh+d72IuV8LramXbecDbV7hBpvW+1M5kQz/PoGxYZjLWsNGFqfWbF/aC2Pa/zBWi5qocqDVSxdrwga5g+E3bZ3jds63FHOGwBJWAkMBr3bEr3WAXGZ68QqEeGyXozZxE/Kh4MhjTdliEvpgpnj12Q4HboV0tT0q3RZr4FgJq3LtoJgNbj2CVLn4apUS93qfIe27tihRsr+hEt8AmT0=)
2025-09-19 16:30:12.331195 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBD9xUCjMwBtR7ei9lKnzvKiV4RAfU6HqZzlYjbAkli0gzIhaQy4uv19K7C9hzvUQStiwikPsJxpffpnJ9CaC2Ug=)
2025-09-19 16:30:12.331203 | orchestrator |
2025-09-19 16:30:12.331212 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 16:30:12.331221 | orchestrator | Friday 19 September 2025 16:30:09 +0000 (0:00:01.121) 0:00:20.387 ******
2025-09-19 16:30:12.331229 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINI6VnWACyPrqOWm2RRouTHEWG+0mJsvDnsMqkRj31Lk)
2025-09-19 16:30:12.331238 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfNoX1p36j7jXqHDyj+QvuTR62Bh9ZwdD9dE097rU5V6YkKX00cM12FdiA/0rk1Rox/KTl2V+iXmFoLwYFEYlnMLzU2X0gwT00VAStNsoKp7687SBufPEJPavD8i7YMtw/hWfongcdrvy4clsm/KLfzKM7OZSaD4L+h+AiuCMWK9lWU9B73JgTE2zRfjLcYaGfXpXXI86LVWfXqS/3cebXfYwN4nLDWhn1qKZecHZlTNzRIP9ip0/AV6JXxkQBsMQky497/1y117jrLZJmD2ns6P+cImQwcaHfT0Eiugb8s0BgxfLrcAlbpHUWWrR+93VFBKfoD/mpiwwJ2NDr+na669cSe1uUtsg1KkEigvffXrzlDpf6I1Gr8o9TgFzN8fqMjzrAIK6jrhUeG2wgpmtccRis4niHjjkg+FIL0tMpvzIMpmuzfvWH2rdWnyXregw6V+IFO7/Qn5CuoTqLWqlBJ7BzdoNgq3sQXQraP4VxC2YKMo19JwblZpcXxBP28nc=)
2025-09-19 16:30:12.331247 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBERtiOfAd/ZDGEa6kN2g1mAqcgVLtVda5ujolLuHg8qjECtVNuMOSkbxfC06zc4zL70oiPBDsntEPscrAlRb0OE=)
2025-09-19 16:30:12.331256 | orchestrator |
2025-09-19 16:30:12.331264 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 16:30:12.331273 | orchestrator | Friday 19 September 2025 16:30:10 +0000 (0:00:01.065) 0:00:21.453 ******
2025-09-19 16:30:12.331288 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuAfipnhl9p9FrOg2GSkOwYK9k7Vm3dYWSuKYnqDcA0O2xbHws3fLuJBAePU23XTEFQQI7QbuIOypW7wm/J3GlM1NqDz1J7WsErfe8a9f+ZFwjB/9aF69RNCKJt/61bDayWBl3XmuF2YOudgPJpCoEecaz67cBdATrHSz0k62nDzd7h9w+O0hRGFGVFuBlmxaNkDkh8ntJS5SCwD8+67UrsFZ6xnfOTFjA+i4iOKhF+jcnkhp8BcMr+N8oWIu41puOtA/YKoU4dB3S/EowedZhny+eDq+d+SfAlYWIQ9u7Z0z11xxW3p7wrtO6WaxaqcIC4UgZg+gtOAPIIrzT/swDbhuRiSJcNiYhdnmLRb1uqzatE6e2b9mFefKSle+bsFXXYE+VMxlISywt9PZMMtZ75iD/mMN+y7NHHYQAsoMocwXNAZwL1QyboSbOgAnTj24W3YgE6ZFKz9JRqky7aFGlllc+bo2KUmlbDW6hTw8qUHXpaaIO2xlHX13mIzgcitU=)
2025-09-19 16:30:12.331302 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGRLTgK92P5fEqwZxgJBOCu2LOIv23jhilfymkWLdB32TM5R2pLieEfdp71kty9/dlBp3mW8RmDK2VUFdnzN3As=)
2025-09-19 16:30:12.331311 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFZgmc4TdXPW+4cIIuT1uOITrVKyol2hG1mAQOWnoDtE)
2025-09-19 16:30:12.331320 | orchestrator |
2025-09-19 16:30:12.331328 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 16:30:12.331337 | orchestrator | Friday 19 September 2025 16:30:11 +0000 (0:00:01.067) 0:00:22.520 ******
2025-09-19 16:30:12.331346 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCZG5A31Rhid/oUI2rWZVDMZy6L3wk1Pek+SY2WJhJJfZ8ob+ny9d8kjkKbwsRB/w+Duy9Ed1/BpD8b8vL9gSGU4hcyrKw0X/BbaH/lejJ6xcik3U+2ZyC3DqXZGReWNfDoH4Rq+w7obR7WrrV6Wyz0TkO9PdpREs4pjE7aU9ju9FFIvcKoEgUaXv1Fuwbs4Mkxv8sr93K7robnHsPixQzl/vxgLxiAVNostlUXbQW44FbIHuF9d8qVfPVbfRKFl3Cm3ibTGORLeo6jpfRq39/8g8aAOytmv69hV9+6SQJZqjC5+CiFnm1fzPlFgIOLFaMTLAvCCT/euT9JTA9lTzvIOmdgHzQgS/0bn48bWfx4BB39UUODbdy+41ef66RcqakWaYget1D42xRj1np3mzN0vWQJcwoZVn9KTT90pN5kj/PS38ZQ+Csy92rSbEMLZnqyIvVyRw2Ymswpms3BVvUrYMtcO1AB3Tw5LlAOU5WWnwBJnOROkNHoPKZJpH3XSf8=)
2025-09-19 16:30:12.331355 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL8m864QCbJFCf8JLGv6dtERlKNoctm+Mv/VuJVSQVhxtmGu74C3v0SWNl7QRfg8LYBaitS+ECvsSx++HaFOKkk=)
2025-09-19 16:30:12.331373 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKJVhABGbn8IktYEMuvHQd7BYsyMv1d8ytxrNslWTpRd)
2025-09-19 16:30:16.558298 | orchestrator |
2025-09-19 16:30:16.558395 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 16:30:16.558470 | orchestrator | Friday 19 September 2025 16:30:12 +0000 (0:00:01.092) 0:00:23.613 ******
2025-09-19 16:30:16.558488 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0RY7C41++QqFoWO7+/aVEsJbq0D+NHhkz7XanfagDL2Clkt46t68iAfCRF8NWrbFPnMWWJ3dLJbxlWCpBN9PuM0Q3uN8OzQl3Djpq3AZ5Hk8k7qcAYlqSS6KwYbSqOzGq5AkkSqca1l3lI6Cf9nXU4cXIgCQLLBVlPqR90+8tfvtZkJcOZXdeA1ZRsmM6tvXAlBR6stPrKsRz4474BE/ZubDAikWVuFHN/j1Tbzt8FF0xgRZiDT8Yf1I+F1vqTTEJMPsZ29kza8ubZWQ8QzCUgp6zratAzUjP5VwBdJusExvkb5RuyPCt0HnM6n16n1nPGFp9nFF3R6Dhb+vm3rq2YI5LYHr1OrxBqtZNVARKT3IatMADiHmVLd4TIB1grdlA7eB8SCRcimTgty+//E02LBcHqXBm8PsV1Rz82ouECbLs6YlqFs2VXyp6hHBZ5MLIdl87NtJwQ8uCesUVmY+2eDOAV/A3KsIbj/L32A0VFxPQ7L0mkP8DzYhKbt5y6oE=)
2025-09-19 16:30:16.558504 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG714OLnz1ALErpkndALuIwMy65hmUGnc0M5uFudEozWxXaXkT8vZaUGXMfe+ttZ7rXv8xkfa7ddPVQYC0pyAio=)
2025-09-19 16:30:16.558517 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIiav3taE/6d9FspPkwgqEGrPthlLcCJ1DAUovZA6Uo/)
2025-09-19 16:30:16.558529 | orchestrator |
2025-09-19 16:30:16.558540 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 16:30:16.558551 | orchestrator | Friday 19 September 2025 16:30:13 +0000 (0:00:01.092) 0:00:24.705 ******
2025-09-19 16:30:16.558562 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHO5d8ng9iehtNELdZ9+f5xkX6joKKMaIrisdRq33pa9)
2025-09-19 16:30:16.558596 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCs7i6UDCBpqbBL1XTwYKJl9LGwApn/vQ+syrIHw84nvw4Bnck49+FWDORTXFE8r8QcX+3z8jdEH4D52zr5FVKqcRy6WP/VugyPp+V5ZjCVeDbKo7RCJTNaTKm4dmtDTaDhcfpI90IRPBZaWSUV2IrOjPvllbw6K7/z0ayM6uP12c/GtR8rRAw5R8alKjJnPIukJqOSqvmEZm+qKtPHOExv7tU2nGD7cJs9wKjxawCU+bW7sZYH3ILpcw3V+2i8v2PYXyCbwtjt9JV2QQXzYGmBHePU7Wkf9vGrozAwu1g2MoNGaF679Qwd8OjETWZ2Hf7l5srSU6B2EqYh6Ruv2BXEWZgTzvgsGFHvWeIpF0BnUkPSH6BO9OW08bolZXOSH+HwBnWU+6jIfcc9S/U8clJ33p1bRN0NBPOS1bTFg3rtthXtRjlM251Ksa4nAAxwywefnJO+LloeZYfJgiYi06UwBP0AhNvy4V+E7uL2kBFnf6ZIcj6N+ihyGcUTX2mArAs=)
2025-09-19 16:30:16.558608 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGNtgJ2ArL9PeUylt1hwRo0rAQag6UVXLXH4pa79XWmeGbrv9VWIPpjh748cVXY4y7Nfvn0w+KgK/btVdHaafyU=)
2025-09-19 16:30:16.558619 | orchestrator |
2025-09-19 16:30:16.558630 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-09-19 16:30:16.558641 | orchestrator | Friday 19 September 2025 16:30:14 +0000 (0:00:01.039) 0:00:25.745 ******
2025-09-19 16:30:16.558652 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQD7YifN3AytFUvsjrDzED6f96xr/eDkCf8W7acqBq6vJ5ryrCYe483GJq8sn0Cb51EsebSmWw7GQjK0k9myTGIsulnFSRH9pdOVKhE9Dv2YR4Rj+hLw4UgKXHxcBpUmZlByOecNWUGY8/c1AF36ez8NJfZ8gjYaJILUriW2irQmDWyTpdNGoxJEEixosCdHTQPTJvvtJiqRn7qv5Bi4fLTjIXm8r+NR3aQkaV0/kjV4SroJgUUYYcmclMdyV7AN8LGXUcYWLON3CZa4LaRIGiYKQv3kspVeRneHtPO5va9SyPR+lg27LXFXMtcTDWeZU3Aglf+Sszw0kYGpLiGl0o5/Gzmu144CviTt5YJX3qT1jTBUcCmvQ50f8lDJa3GLs3k+cYxhPO9DLGrtONJhLI1lFW6rzhxI3JBCxwqBIOndu6CveI5qdi7BgAv/0m3nFmKI6Y/3sdcNxRlZKPFc4xyKBGZxF8pbXnN0cKM6/lA3ilpLkRcmJI9Zq3g6s2OHPYU=)
2025-09-19 16:30:16.558663 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEWH5+hXnySQOzHx+wFhi/basl/F3ZAu+p8SOT2sVDEiMrXy2Sk/mjbiSJMvOn5oUxNwdMyDarCTgO3McobHnCA=)
2025-09-19 16:30:16.558674 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFdvrreyXe+fpRh1ngmv17ltbGRt+vCQMMzMuyDj8G8F)
2025-09-19 16:30:16.558685 | orchestrator |
2025-09-19 16:30:16.558696 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************
2025-09-19 16:30:16.558707 | orchestrator | Friday 19 September 2025 16:30:15 +0000 (0:00:01.049) 0:00:26.795 ******
2025-09-19 16:30:16.558718 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-19 16:30:16.558729 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-19 16:30:16.558740 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-19 16:30:16.558751 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-19 16:30:16.558761 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-19 16:30:16.558772 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-19 16:30:16.558783 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-19 16:30:16.558794 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:30:16.558806 | orchestrator |
2025-09-19 16:30:16.558832 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] *************
2025-09-19 16:30:16.558845 | orchestrator | Friday 19 September 2025 16:30:15 +0000 (0:00:00.173) 0:00:26.968 ******
2025-09-19 16:30:16.558858 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:30:16.558870 | orchestrator |
2025-09-19 16:30:16.558882 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ******************
2025-09-19 16:30:16.558910 | orchestrator | Friday 19 September 2025 16:30:15 +0000 (0:00:00.063) 0:00:27.031 ******
2025-09-19 16:30:16.558923 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:30:16.558935 | orchestrator |
2025-09-19 16:30:16.558947 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************
2025-09-19 16:30:16.558959 | orchestrator | Friday 19 September 2025 16:30:15 +0000 (0:00:00.057) 0:00:27.089 ******
2025-09-19 16:30:16.558978 | orchestrator | changed: [testbed-manager]
2025-09-19 16:30:16.558990 | orchestrator |
2025-09-19 16:30:16.559002 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:30:16.559014 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 16:30:16.559026 | orchestrator |
2025-09-19 16:30:16.559038 | orchestrator |
2025-09-19 16:30:16.559050 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:30:16.559063 | orchestrator | Friday 19 September 2025 16:30:16 +0000 (0:00:00.508) 0:00:27.597 ******
2025-09-19 16:30:16.559075 | orchestrator | ===============================================================================
2025-09-19 16:30:16.559087 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.01s
2025-09-19 16:30:16.559100 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.20s
2025-09-19 16:30:16.559111 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s
2025-09-19 16:30:16.559122 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s
2025-09-19 16:30:16.559133 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2025-09-19 16:30:16.559143 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-09-19 16:30:16.559154 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s
2025-09-19 16:30:16.559164 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-09-19 16:30:16.559175 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s
2025-09-19 16:30:16.559185 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-09-19 16:30:16.559196 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s
2025-09-19 16:30:16.559206 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s
2025-09-19 16:30:16.559217 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-09-19 16:30:16.559228 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-09-19 16:30:16.559238 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-09-19 16:30:16.559249 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-09-19 16:30:16.559259 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.51s
2025-09-19 16:30:16.559270 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s
2025-09-19 16:30:16.559280 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s
2025-09-19 16:30:16.559291 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s
2025-09-19 16:30:16.858079 | orchestrator | + osism apply squid
2025-09-19 16:30:28.887594 | orchestrator | 2025-09-19 16:30:28 | INFO  | Task c8e49c3a-8d80-4cac-b27c-31a3e35ccf24 (squid) was prepared for execution.
2025-09-19 16:30:28.887688 | orchestrator | 2025-09-19 16:30:28 | INFO  | It takes a moment until task c8e49c3a-8d80-4cac-b27c-31a3e35ccf24 (squid) has been started and output is visible here.
2025-09-19 16:32:21.906266 | orchestrator |
2025-09-19 16:32:21.906387 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-09-19 16:32:21.906402 | orchestrator |
2025-09-19 16:32:21.906414 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-09-19 16:32:21.906426 | orchestrator | Friday 19 September 2025 16:30:32 +0000 (0:00:00.163) 0:00:00.163 ******
2025-09-19 16:32:21.906437 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 16:32:21.906449 | orchestrator |
2025-09-19 16:32:21.906542 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-09-19 16:32:21.906584 | orchestrator | Friday 19 September 2025 16:30:32 +0000 (0:00:00.084) 0:00:00.248 ******
2025-09-19 16:32:21.906597 | orchestrator | ok: [testbed-manager]
2025-09-19 16:32:21.906609 | orchestrator |
2025-09-19 16:32:21.906620 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-09-19 16:32:21.906631 | orchestrator | Friday 19 September 2025 16:30:34 +0000 (0:00:01.392) 0:00:01.640 ******
2025-09-19 16:32:21.906642 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-09-19 16:32:21.906653 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-09-19 16:32:21.906664 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-09-19 16:32:21.906675 | orchestrator |
2025-09-19 16:32:21.906685 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-09-19 16:32:21.906696 | orchestrator | Friday 19 September 2025 16:30:35 +0000 (0:00:01.130) 0:00:02.770 ******
2025-09-19 16:32:21.906707 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-09-19 16:32:21.906718 | orchestrator |
2025-09-19 16:32:21.906728 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-09-19 16:32:21.906739 | orchestrator | Friday 19 September 2025 16:30:36 +0000 (0:00:01.028) 0:00:03.799 ******
2025-09-19 16:32:21.906750 | orchestrator | ok: [testbed-manager]
2025-09-19 16:32:21.906760 | orchestrator |
2025-09-19 16:32:21.906771 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-09-19 16:32:21.906781 | orchestrator | Friday 19 September 2025 16:30:36 +0000 (0:00:00.955) 0:00:04.158 ******
2025-09-19 16:32:21.906792 | orchestrator | changed: [testbed-manager]
2025-09-19 16:32:21.906803 | orchestrator |
2025-09-19 16:32:21.906816 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-09-19 16:32:21.906828 | orchestrator | Friday 19 September 2025 16:30:37 +0000 (0:00:00.955) 0:00:05.113 ******
2025-09-19 16:32:21.906840 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
2025-09-19 16:32:21.906852 | orchestrator | ok: [testbed-manager]
2025-09-19 16:32:21.906864 | orchestrator |
2025-09-19 16:32:21.906876 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-09-19 16:32:21.906888 | orchestrator | Friday 19 September 2025 16:31:08 +0000 (0:00:31.167) 0:00:36.281 ******
2025-09-19 16:32:21.906900 | orchestrator | changed: [testbed-manager]
2025-09-19 16:32:21.906912 | orchestrator |
2025-09-19 16:32:21.906924 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-09-19 16:32:21.906935 | orchestrator | Friday 19 September 2025 16:31:20 +0000 (0:00:12.002) 0:00:48.284 ******
2025-09-19 16:32:21.906947 | orchestrator | Pausing for 60 seconds
2025-09-19 16:32:21.906958 | orchestrator | changed: [testbed-manager]
2025-09-19 16:32:21.906969 | orchestrator |
2025-09-19 16:32:21.906979 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-09-19 16:32:21.906990 | orchestrator | Friday 19 September 2025 16:32:20 +0000 (0:01:00.068) 0:01:48.352 ******
2025-09-19 16:32:21.907001 | orchestrator | ok: [testbed-manager]
2025-09-19 16:32:21.907012 | orchestrator |
2025-09-19 16:32:21.907022 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-09-19 16:32:21.907033 | orchestrator | Friday 19 September 2025 16:32:21 +0000 (0:00:00.075) 0:01:48.427 ******
2025-09-19 16:32:21.907044 | orchestrator | changed: [testbed-manager]
2025-09-19 16:32:21.907054 | orchestrator |
2025-09-19 16:32:21.907065 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:32:21.907076 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:32:21.907087 | orchestrator |
2025-09-19 16:32:21.907097 | orchestrator |
2025-09-19 16:32:21.907108 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:32:21.907118 | orchestrator | Friday 19 September 2025 16:32:21 +0000 (0:00:00.623) 0:01:49.051 ******
2025-09-19 16:32:21.907141 | orchestrator | ===============================================================================
2025-09-19 16:32:21.907157 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s
2025-09-19 16:32:21.907176 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.17s
2025-09-19 16:32:21.907188 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.00s
2025-09-19 16:32:21.907198 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.39s
2025-09-19 16:32:21.907209 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.13s
2025-09-19 16:32:21.907220 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.03s
2025-09-19 16:32:21.907252 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.96s
2025-09-19 16:32:21.907263 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.62s
2025-09-19 16:32:21.907274 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.36s
2025-09-19 16:32:21.907285 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.08s
2025-09-19 16:32:21.907295 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s
2025-09-19 16:32:22.174560 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-19 16:32:22.175226 | orchestrator | ++ semver latest 9.0.0
2025-09-19 16:32:22.235830 | orchestrator | + [[ -1 -lt 0 ]]
2025-09-19 16:32:22.235909 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
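The `+`/`++` trace lines above show the deployment script gating a step on the requested OSISM version: `semver latest 9.0.0` returns `-1`, which is then tested with `[[ -1 -lt 0 ]]`. A minimal sketch of such a version gate, assuming a semver-compare helper (`compare_ver` below is a hypothetical stand-in for the testbed's real `semver` CLI, which is not shown in this log; the exact control flow of the script is also not visible here):

```shell
#!/usr/bin/env bash
# compare_ver A B: print -1, 0, or 1 as A sorts before, equal to, or after B.
# "latest" is treated as a pseudo-version that compares as -1 against any
# concrete version, matching the `semver latest 9.0.0` -> -1 seen in the log.
compare_ver() {
  if [ "$1" = "$2" ]; then echo 0; return; fi
  if [ "$1" = "latest" ]; then echo -1; return; fi
  if [ "$2" = "latest" ]; then echo 1; return; fi
  # GNU sort -V orders version strings numerically component by component.
  if [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; then
    echo -1
  else
    echo 1
  fi
}

version="latest"
# Only pinned versions older than 9.0.0 would take the compatibility branch;
# "latest" skips it, as in the trace above.
if [[ $version != latest ]] && [ "$(compare_ver "$version" 9.0.0)" -lt 0 ]; then
  echo "running pre-9.0.0 compatibility step"
fi
```

With `version=latest` the guard falls through and nothing extra runs; with a pin such as `version=8.1.0` the comparison yields `-1` and the compatibility branch would execute.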
2025-09-19 16:32:22.236607 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-09-19 16:32:34.236426 | orchestrator | 2025-09-19 16:32:34 | INFO  | Task c989805b-0068-4a36-9ebf-d2771eaf6470 (operator) was prepared for execution.
2025-09-19 16:32:34.236587 | orchestrator | 2025-09-19 16:32:34 | INFO  | It takes a moment until task c989805b-0068-4a36-9ebf-d2771eaf6470 (operator) has been started and output is visible here.
2025-09-19 16:32:49.966094 | orchestrator |
2025-09-19 16:32:49.966210 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-09-19 16:32:49.966226 | orchestrator |
2025-09-19 16:32:49.966238 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-19 16:32:49.966250 | orchestrator | Friday 19 September 2025 16:32:38 +0000 (0:00:00.149) 0:00:00.149 ******
2025-09-19 16:32:49.966262 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:32:49.966274 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:32:49.966285 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:32:49.966295 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:32:49.966306 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:32:49.966316 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:32:49.966327 | orchestrator |
2025-09-19 16:32:49.966338 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-09-19 16:32:49.966349 | orchestrator | Friday 19 September 2025 16:32:41 +0000 (0:00:03.431) 0:00:03.581 ******
2025-09-19 16:32:49.966360 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:32:49.966370 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:32:49.966381 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:32:49.966392 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:32:49.966402 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:32:49.966413 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:32:49.966423 | orchestrator |
2025-09-19 16:32:49.966434 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-09-19 16:32:49.966444 | orchestrator |
2025-09-19 16:32:49.966455 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-09-19 16:32:49.966489 | orchestrator | Friday 19 September 2025 16:32:42 +0000 (0:00:00.160) 0:00:04.319 ******
2025-09-19 16:32:49.966500 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:32:49.966511 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:32:49.966521 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:32:49.966532 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:32:49.966542 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:32:49.966553 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:32:49.966586 | orchestrator |
2025-09-19 16:32:49.966599 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-09-19 16:32:49.966611 | orchestrator | Friday 19 September 2025 16:32:42 +0000 (0:00:00.162) 0:00:04.480 ******
2025-09-19 16:32:49.966623 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:32:49.966634 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:32:49.966646 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:32:49.966661 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:32:49.966680 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:32:49.966696 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:32:49.966724 | orchestrator |
2025-09-19 16:32:49.966743 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-09-19 16:32:49.966760 | orchestrator | Friday 19 September 2025 16:32:42 +0000 (0:00:00.587) 0:00:04.642 ******
2025-09-19 16:32:49.966778 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:32:49.966797 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:32:49.966815 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:32:49.966834 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:32:49.966851 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:32:49.966870 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:32:49.966888 | orchestrator |
2025-09-19 16:32:49.966907 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-09-19 16:32:49.966925 | orchestrator | Friday 19 September 2025 16:32:43 +0000 (0:00:00.802) 0:00:05.229 ******
2025-09-19 16:32:49.966943 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:32:49.966962 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:32:49.966981 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:32:49.967000 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:32:49.967019 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:32:49.967037 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:32:49.967056 | orchestrator |
2025-09-19 16:32:49.967074 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-09-19 16:32:49.967092 | orchestrator | Friday 19 September 2025 16:32:43 +0000 (0:00:01.225) 0:00:06.032 ******
2025-09-19 16:32:49.967111 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-09-19 16:32:49.967130 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-09-19 16:32:49.967148 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-09-19 16:32:49.967167 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-09-19 16:32:49.967186 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-09-19 16:32:49.967205 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-09-19 16:32:49.967222 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-09-19 16:32:49.967240 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-09-19 16:32:49.967259 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-09-19 16:32:49.967277 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-09-19 16:32:49.967295 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-09-19 16:32:49.967313 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-09-19 16:32:49.967332 | orchestrator |
2025-09-19 16:32:49.967351 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-09-19 16:32:49.967369 | orchestrator | Friday 19 September 2025 16:32:45 +0000 (0:00:01.364) 0:00:07.257 ******
2025-09-19 16:32:49.967387 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:32:49.967406 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:32:49.967424 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:32:49.967443 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:32:49.967462 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:32:49.967537 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:32:49.967558 | orchestrator |
2025-09-19 16:32:49.967578 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-09-19 16:32:49.967599 | orchestrator | Friday 19 September 2025 16:32:46 +0000 (0:00:01.364) 0:00:08.622 ******
2025-09-19 16:32:49.967619 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-09-19 16:32:49.967654 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-09-19 16:32:49.967675 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-09-19 16:32:49.967696 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 16:32:49.967739 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 16:32:49.967761 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 16:32:49.967781 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 16:32:49.967802 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 16:32:49.967821 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-09-19 16:32:49.967842 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-09-19 16:32:49.967863 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-09-19 16:32:49.967884 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-09-19 16:32:49.967905 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-09-19 16:32:49.967925 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-09-19 16:32:49.967947 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-09-19 16:32:49.967965 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-09-19 16:32:49.967984 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-09-19 16:32:49.968005 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-09-19 16:32:49.968026 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-09-19 16:32:49.968045 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-09-19 16:32:49.968066 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-09-19 16:32:49.968086 | orchestrator |
2025-09-19 16:32:49.968104 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-19 16:32:49.968116 | orchestrator | Friday 19 September 2025 16:32:47 +0000 (0:00:01.269) 0:00:09.891 ******
2025-09-19 16:32:49.968127 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:32:49.968138 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:32:49.968149 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:32:49.968159 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:32:49.968170 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:32:49.968181 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:32:49.968195 | orchestrator |
2025-09-19 16:32:49.968213 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-19 16:32:49.968230 | orchestrator | Friday 19 September 2025 16:32:48 +0000 (0:00:00.166) 0:00:10.058 ******
2025-09-19 16:32:49.968249 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:32:49.968267 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:32:49.968284 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:32:49.968303 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:32:49.968321 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:32:49.968340 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:32:49.968358 | orchestrator |
2025-09-19 16:32:49.968376 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-19 16:32:49.968393 | orchestrator | Friday 19 September 2025 16:32:48 +0000 (0:00:00.571) 0:00:10.629 ******
2025-09-19 16:32:49.968411 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:32:49.968430 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:32:49.968449 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:32:49.968461 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:32:49.968534 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:32:49.968546 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:32:49.968557 | orchestrator |
2025-09-19 16:32:49.968597 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-19 16:32:49.968609 | orchestrator | Friday 19 September 2025 16:32:48 +0000 (0:00:00.165) 0:00:10.795 ******
2025-09-19 16:32:49.968620 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 16:32:49.968635 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 16:32:49.968647 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:32:49.968657 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-19 16:32:49.968668 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:32:49.968679 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 16:32:49.968690 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:32:49.968700 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:32:49.968711 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 16:32:49.968722 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:32:49.968733 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-19 16:32:49.968743 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:32:49.968754 | orchestrator |
2025-09-19 16:32:49.968765 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-19 16:32:49.968776 | orchestrator | Friday 19 September 2025 16:32:49 +0000 (0:00:00.737) 0:00:11.533 ******
2025-09-19 16:32:49.968787 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:32:49.968796 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:32:49.968806 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:32:49.968820 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:32:49.968830 | orchestrator | skipping: [testbed-node-4]
2025-09-19
16:32:49.968839 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:32:49.968849 | orchestrator | 2025-09-19 16:32:49.968858 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-19 16:32:49.968868 | orchestrator | Friday 19 September 2025 16:32:49 +0000 (0:00:00.150) 0:00:11.684 ****** 2025-09-19 16:32:49.968877 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:32:49.968887 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:32:49.968896 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:32:49.968906 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:32:49.968915 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:32:49.968924 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:32:49.968934 | orchestrator | 2025-09-19 16:32:49.968943 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-19 16:32:49.968953 | orchestrator | Friday 19 September 2025 16:32:49 +0000 (0:00:00.157) 0:00:11.841 ****** 2025-09-19 16:32:49.968963 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:32:49.968972 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:32:49.968982 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:32:49.968992 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:32:49.969010 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:32:51.136100 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:32:51.136201 | orchestrator | 2025-09-19 16:32:51.136216 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-19 16:32:51.136230 | orchestrator | Friday 19 September 2025 16:32:49 +0000 (0:00:00.151) 0:00:11.993 ****** 2025-09-19 16:32:51.136241 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:32:51.136252 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:32:51.136263 | orchestrator | changed: [testbed-node-3] 2025-09-19 
16:32:51.136274 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:32:51.136286 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:32:51.136297 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:32:51.136308 | orchestrator | 2025-09-19 16:32:51.136319 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-19 16:32:51.136331 | orchestrator | Friday 19 September 2025 16:32:50 +0000 (0:00:00.707) 0:00:12.700 ****** 2025-09-19 16:32:51.136342 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:32:51.136353 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:32:51.136364 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:32:51.136400 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:32:51.136411 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:32:51.136422 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:32:51.136433 | orchestrator | 2025-09-19 16:32:51.136444 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 16:32:51.136456 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 16:32:51.136528 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 16:32:51.136540 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 16:32:51.136551 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 16:32:51.136562 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 16:32:51.136572 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 16:32:51.136583 | orchestrator | 2025-09-19 16:32:51.136594 | orchestrator | 2025-09-19 16:32:51.136604 | 
orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 16:32:51.136615 | orchestrator | Friday 19 September 2025 16:32:50 +0000 (0:00:00.245) 0:00:12.946 ****** 2025-09-19 16:32:51.136626 | orchestrator | =============================================================================== 2025-09-19 16:32:51.136637 | orchestrator | Gathering Facts --------------------------------------------------------- 3.43s 2025-09-19 16:32:51.136647 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.36s 2025-09-19 16:32:51.136658 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s 2025-09-19 16:32:51.136670 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.23s 2025-09-19 16:32:51.136680 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.80s 2025-09-19 16:32:51.136691 | orchestrator | Do not require tty for all users ---------------------------------------- 0.74s 2025-09-19 16:32:51.136701 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s 2025-09-19 16:32:51.136712 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.71s 2025-09-19 16:32:51.136722 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.59s 2025-09-19 16:32:51.136733 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.57s 2025-09-19 16:32:51.136744 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.25s 2025-09-19 16:32:51.136755 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.17s 2025-09-19 16:32:51.136766 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.17s 2025-09-19 
16:32:51.136777 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s 2025-09-19 16:32:51.136787 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2025-09-19 16:32:51.136797 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-09-19 16:32:51.136808 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.15s 2025-09-19 16:32:51.136819 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2025-09-19 16:32:51.425430 | orchestrator | + osism apply --environment custom facts 2025-09-19 16:32:53.277120 | orchestrator | 2025-09-19 16:32:53 | INFO  | Trying to run play facts in environment custom 2025-09-19 16:33:03.353582 | orchestrator | 2025-09-19 16:33:03 | INFO  | Task ed407a9e-46fd-4781-91ae-a21b143aab0e (facts) was prepared for execution. 2025-09-19 16:33:03.353696 | orchestrator | 2025-09-19 16:33:03 | INFO  | It takes a moment until task ed407a9e-46fd-4781-91ae-a21b143aab0e (facts) has been started and output is visible here. 
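The PLAY RECAP blocks in this log follow a fixed `host : ok=N changed=N unreachable=N failed=N …` format, which makes them easy to check mechanically, for example when post-processing a job console like this one. A small sketch in Python; the regex is derived from the recap lines visible above, not from any osism or Zuul tooling:

```python
import re

# Matches one PLAY RECAP line as it appears in the log above.
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)\s+"
    r"unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def failed_hosts(recap_lines):
    """Return hosts whose recap reports failures or unreachability."""
    bad = []
    for line in recap_lines:
        m = RECAP_RE.match(line.strip())
        if m and (int(m["failed"]) or int(m["unreachable"])):
            bad.append(m["host"])
    return bad

# Sample lines copied from the recap above: every host is healthy.
recap = [
    "testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0",
    "testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0",
]
print(failed_hosts(recap))  # -> []
```

A non-empty result would be the signal to dig into the corresponding TASK output earlier in the log.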
2025-09-19 16:33:48.857250 | orchestrator | 2025-09-19 16:33:48.857368 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-09-19 16:33:48.857384 | orchestrator | 2025-09-19 16:33:48.857396 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-19 16:33:48.857407 | orchestrator | Friday 19 September 2025 16:33:07 +0000 (0:00:00.085) 0:00:00.085 ****** 2025-09-19 16:33:48.857418 | orchestrator | ok: [testbed-manager] 2025-09-19 16:33:48.857430 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:33:48.857442 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:33:48.857452 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:33:48.857463 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:33:48.857473 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:33:48.857484 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:33:48.857547 | orchestrator | 2025-09-19 16:33:48.857559 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-09-19 16:33:48.857570 | orchestrator | Friday 19 September 2025 16:33:08 +0000 (0:00:01.365) 0:00:01.451 ****** 2025-09-19 16:33:48.857581 | orchestrator | ok: [testbed-manager] 2025-09-19 16:33:48.857592 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:33:48.857603 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:33:48.857614 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:33:48.857624 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:33:48.857635 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:33:48.857651 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:33:48.857669 | orchestrator | 2025-09-19 16:33:48.857688 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-09-19 16:33:48.857708 | orchestrator | 2025-09-19 16:33:48.857728 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2025-09-19 16:33:48.857747 | orchestrator | Friday 19 September 2025 16:33:09 +0000 (0:00:01.120) 0:00:02.571 ****** 2025-09-19 16:33:48.857765 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:33:48.857783 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:33:48.857803 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:33:48.857823 | orchestrator | 2025-09-19 16:33:48.857841 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-19 16:33:48.857862 | orchestrator | Friday 19 September 2025 16:33:09 +0000 (0:00:00.106) 0:00:02.678 ****** 2025-09-19 16:33:48.857882 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:33:48.857902 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:33:48.857921 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:33:48.857942 | orchestrator | 2025-09-19 16:33:48.857963 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-19 16:33:48.857984 | orchestrator | Friday 19 September 2025 16:33:09 +0000 (0:00:00.197) 0:00:02.875 ****** 2025-09-19 16:33:48.858006 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:33:48.858141 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:33:48.858164 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:33:48.858182 | orchestrator | 2025-09-19 16:33:48.858201 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-19 16:33:48.858219 | orchestrator | Friday 19 September 2025 16:33:10 +0000 (0:00:00.189) 0:00:03.065 ****** 2025-09-19 16:33:48.858238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:33:48.858256 | orchestrator | 2025-09-19 16:33:48.858273 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2025-09-19 16:33:48.858291 | orchestrator | Friday 19 September 2025 16:33:10 +0000 (0:00:00.165) 0:00:03.230 ****** 2025-09-19 16:33:48.858343 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:33:48.858381 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:33:48.858399 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:33:48.858418 | orchestrator | 2025-09-19 16:33:48.858435 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-19 16:33:48.858454 | orchestrator | Friday 19 September 2025 16:33:10 +0000 (0:00:00.458) 0:00:03.688 ****** 2025-09-19 16:33:48.858473 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:33:48.858521 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:33:48.858543 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:33:48.858561 | orchestrator | 2025-09-19 16:33:48.858579 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-19 16:33:48.858597 | orchestrator | Friday 19 September 2025 16:33:10 +0000 (0:00:00.138) 0:00:03.827 ****** 2025-09-19 16:33:48.858615 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:33:48.858633 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:33:48.858651 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:33:48.858669 | orchestrator | 2025-09-19 16:33:48.858687 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-19 16:33:48.858706 | orchestrator | Friday 19 September 2025 16:33:11 +0000 (0:00:00.966) 0:00:04.794 ****** 2025-09-19 16:33:48.858718 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:33:48.858728 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:33:48.858739 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:33:48.858750 | orchestrator | 2025-09-19 16:33:48.858761 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-19 
16:33:48.858780 | orchestrator | Friday 19 September 2025 16:33:12 +0000 (0:00:00.418) 0:00:05.212 ****** 2025-09-19 16:33:48.858791 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:33:48.858802 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:33:48.858813 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:33:48.858824 | orchestrator | 2025-09-19 16:33:48.858834 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-19 16:33:48.858845 | orchestrator | Friday 19 September 2025 16:33:13 +0000 (0:00:00.952) 0:00:06.164 ****** 2025-09-19 16:33:48.858856 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:33:48.858866 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:33:48.858876 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:33:48.858887 | orchestrator | 2025-09-19 16:33:48.858898 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-09-19 16:33:48.858909 | orchestrator | Friday 19 September 2025 16:33:30 +0000 (0:00:17.207) 0:00:23.371 ****** 2025-09-19 16:33:48.858919 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:33:48.858930 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:33:48.858941 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:33:48.858951 | orchestrator | 2025-09-19 16:33:48.858962 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-09-19 16:33:48.858995 | orchestrator | Friday 19 September 2025 16:33:30 +0000 (0:00:00.119) 0:00:23.491 ****** 2025-09-19 16:33:48.859006 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:33:48.859017 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:33:48.859028 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:33:48.859038 | orchestrator | 2025-09-19 16:33:48.859049 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-09-19 
16:33:48.859060 | orchestrator | Friday 19 September 2025 16:33:38 +0000 (0:00:08.006) 0:00:31.497 ****** 2025-09-19 16:33:48.859071 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:33:48.859082 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:33:48.859092 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:33:48.859103 | orchestrator | 2025-09-19 16:33:48.859114 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-09-19 16:33:48.859124 | orchestrator | Friday 19 September 2025 16:33:39 +0000 (0:00:00.433) 0:00:31.931 ****** 2025-09-19 16:33:48.859135 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-09-19 16:33:48.859159 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-09-19 16:33:48.859170 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-09-19 16:33:48.859180 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-09-19 16:33:48.859191 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-09-19 16:33:48.859202 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-09-19 16:33:48.859212 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-09-19 16:33:48.859222 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-09-19 16:33:48.859233 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-09-19 16:33:48.859244 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-09-19 16:33:48.859254 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-09-19 16:33:48.859265 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-09-19 16:33:48.859276 | orchestrator | 2025-09-19 16:33:48.859286 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2025-09-19 16:33:48.859297 | orchestrator | Friday 19 September 2025 16:33:42 +0000 (0:00:03.429) 0:00:35.360 ****** 2025-09-19 16:33:48.859308 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:33:48.859318 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:33:48.859329 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:33:48.859340 | orchestrator | 2025-09-19 16:33:48.859351 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 16:33:48.859361 | orchestrator | 2025-09-19 16:33:48.859372 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 16:33:48.859383 | orchestrator | Friday 19 September 2025 16:33:43 +0000 (0:00:01.300) 0:00:36.660 ****** 2025-09-19 16:33:48.859394 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:33:48.859404 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:33:48.859415 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:33:48.859426 | orchestrator | ok: [testbed-manager] 2025-09-19 16:33:48.859436 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:33:48.859447 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:33:48.859457 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:33:48.859468 | orchestrator | 2025-09-19 16:33:48.859479 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 16:33:48.859511 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 16:33:48.859524 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 16:33:48.859535 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 16:33:48.859546 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 16:33:48.859557 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 16:33:48.859568 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 16:33:48.859579 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 16:33:48.859589 | orchestrator | 2025-09-19 16:33:48.859600 | orchestrator | 2025-09-19 16:33:48.859611 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 16:33:48.859622 | orchestrator | Friday 19 September 2025 16:33:48 +0000 (0:00:05.070) 0:00:41.731 ****** 2025-09-19 16:33:48.859633 | orchestrator | =============================================================================== 2025-09-19 16:33:48.859651 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.21s 2025-09-19 16:33:48.859662 | orchestrator | Install required packages (Debian) -------------------------------------- 8.01s 2025-09-19 16:33:48.859672 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.07s 2025-09-19 16:33:48.859683 | orchestrator | Copy fact files --------------------------------------------------------- 3.43s 2025-09-19 16:33:48.859694 | orchestrator | Create custom facts directory ------------------------------------------- 1.37s 2025-09-19 16:33:48.859704 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.30s 2025-09-19 16:33:48.859721 | orchestrator | Copy fact file ---------------------------------------------------------- 1.12s 2025-09-19 16:33:49.110799 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.97s 2025-09-19 16:33:49.110902 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.95s 2025-09-19 16:33:49.110917 | orchestrator | osism.commons.repository : Create 
/etc/apt/sources.list.d directory ----- 0.46s 2025-09-19 16:33:49.110928 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s 2025-09-19 16:33:49.110939 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.42s 2025-09-19 16:33:49.110950 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s 2025-09-19 16:33:49.110961 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s 2025-09-19 16:33:49.110971 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s 2025-09-19 16:33:49.110984 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s 2025-09-19 16:33:49.111004 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s 2025-09-19 16:33:49.111023 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-09-19 16:33:49.390178 | orchestrator | + osism apply bootstrap 2025-09-19 16:34:01.377287 | orchestrator | 2025-09-19 16:34:01 | INFO  | Task b812bb5d-4e19-4169-b162-9e2d0c3a990f (bootstrap) was prepared for execution. 2025-09-19 16:34:01.377397 | orchestrator | 2025-09-19 16:34:01 | INFO  | It takes a moment until task b812bb5d-4e19-4169-b162-9e2d0c3a990f (bootstrap) has been started and output is visible here. 
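The "Create custom facts directory" and "Copy fact files" tasks above populate Ansible's local-facts mechanism: files named `*.fact` under `/etc/ansible/facts.d` are read during fact gathering and exposed to plays as `ansible_local.<basename>`. A sketch of that mechanism using a temporary directory as a stand-in for `/etc/ansible/facts.d`; only the fact name `testbed_ceph_devices` comes from the log, and the device list is invented for illustration:

```python
import json
import tempfile
from pathlib import Path

# Stand-in for /etc/ansible/facts.d on a testbed node.
facts_d = Path(tempfile.mkdtemp())

# A static JSON fact file; Ansible also accepts INI files and
# executables that print JSON. The device paths are illustrative.
fact_file = facts_d / "testbed_ceph_devices.fact"
fact_file.write_text(json.dumps({"devices": ["/dev/sdb", "/dev/sdc"]}))

# What fact gathering (e.g. the "Gathers facts about hosts" task)
# would later surface as ansible_local.testbed_ceph_devices:
loaded = json.loads(fact_file.read_text())
print(loaded["devices"])  # -> ['/dev/sdb', '/dev/sdc']
```

This is why the log re-gathers facts ("Gather facts for all hosts") right after copying the fact files: the new `ansible_local` values only become visible to subsequent plays once facts are refreshed.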
2025-09-19 16:34:16.963107 | orchestrator | 2025-09-19 16:34:16.963222 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-09-19 16:34:16.963238 | orchestrator | 2025-09-19 16:34:16.963250 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-09-19 16:34:16.963262 | orchestrator | Friday 19 September 2025 16:34:05 +0000 (0:00:00.162) 0:00:00.162 ****** 2025-09-19 16:34:16.963273 | orchestrator | ok: [testbed-manager] 2025-09-19 16:34:16.963285 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:34:16.963296 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:34:16.963307 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:34:16.963317 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:34:16.963328 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:34:16.963338 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:34:16.963349 | orchestrator | 2025-09-19 16:34:16.963360 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 16:34:16.963371 | orchestrator | 2025-09-19 16:34:16.963382 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 16:34:16.963393 | orchestrator | Friday 19 September 2025 16:34:05 +0000 (0:00:00.235) 0:00:00.398 ****** 2025-09-19 16:34:16.963403 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:34:16.963414 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:34:16.963425 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:34:16.963436 | orchestrator | ok: [testbed-manager] 2025-09-19 16:34:16.963447 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:34:16.963457 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:34:16.963468 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:34:16.963553 | orchestrator | 2025-09-19 16:34:16.963567 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2025-09-19 16:34:16.963578 | orchestrator | 2025-09-19 16:34:16.963589 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-09-19 16:34:16.963600 | orchestrator | Friday 19 September 2025 16:34:09 +0000 (0:00:03.682) 0:00:04.080 ****** 2025-09-19 16:34:16.963611 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-19 16:34:16.963623 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-19 16:34:16.963633 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-09-19 16:34:16.963645 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-19 16:34:16.963658 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-19 16:34:16.963670 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-19 16:34:16.963682 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-19 16:34:16.963694 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-19 16:34:16.963707 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-19 16:34:16.963719 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-19 16:34:16.963731 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-09-19 16:34:16.963743 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-19 16:34:16.963778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-09-19 16:34:16.963790 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-09-19 16:34:16.963802 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-09-19 16:34:16.963814 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-19 16:34:16.963826 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:34:16.963838 | orchestrator | skipping: [testbed-node-0] => 
(item=testbed-node-5)
2025-09-19 16:34:16.963850 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-19 16:34:16.963863 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:34:16.963875 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-19 16:34:16.963888 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-19 16:34:16.963900 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-19 16:34:16.963912 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-19 16:34:16.963924 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 16:34:16.963937 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-19 16:34:16.963949 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 16:34:16.963961 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-19 16:34:16.963973 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-19 16:34:16.963986 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-19 16:34:16.963997 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 16:34:16.964008 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-19 16:34:16.964018 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 16:34:16.964029 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 16:34:16.964039 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 16:34:16.964050 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-19 16:34:16.964060 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 16:34:16.964071 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-19 16:34:16.964081 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-19 16:34:16.964092 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:34:16.964102 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-19 16:34:16.964121 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 16:34:16.964132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 16:34:16.964144 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-19 16:34:16.964155 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-19 16:34:16.964165 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-19 16:34:16.964193 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 16:34:16.964205 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:34:16.964215 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-19 16:34:16.964226 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:34:16.964236 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-19 16:34:16.964247 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 16:34:16.964257 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-19 16:34:16.964268 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:34:16.964278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 16:34:16.964289 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:34:16.964299 | orchestrator |
2025-09-19 16:34:16.964310 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-19 16:34:16.964321 | orchestrator |
2025-09-19 16:34:16.964331 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-19 16:34:16.964342 | orchestrator | Friday 19 September 2025 16:34:09 +0000 (0:00:00.420) 0:00:04.501 ******
2025-09-19 16:34:16.964352 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:34:16.964363 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:34:16.964374 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:34:16.964384 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:34:16.964395 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:34:16.964405 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:34:16.964415 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:16.964426 | orchestrator |
2025-09-19 16:34:16.964437 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-09-19 16:34:16.964448 | orchestrator | Friday 19 September 2025 16:34:10 +0000 (0:00:01.181) 0:00:05.683 ******
2025-09-19 16:34:16.964458 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:16.964469 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:34:16.964479 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:34:16.964490 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:34:16.964500 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:34:16.964540 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:34:16.964551 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:34:16.964562 | orchestrator |
2025-09-19 16:34:16.964573 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-09-19 16:34:16.964583 | orchestrator | Friday 19 September 2025 16:34:12 +0000 (0:00:01.275) 0:00:06.958 ******
2025-09-19 16:34:16.964595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:34:16.964608 | orchestrator |
2025-09-19 16:34:16.964619 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-09-19 16:34:16.964629 | orchestrator | Friday 19 September 2025 16:34:12 +0000 (0:00:00.306) 0:00:07.265 ******
2025-09-19 16:34:16.964640 | orchestrator | changed: [testbed-manager]
2025-09-19 16:34:16.964651 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:34:16.964667 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:34:16.964678 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:34:16.964688 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:34:16.964699 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:34:16.964709 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:34:16.964720 | orchestrator |
2025-09-19 16:34:16.964740 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-09-19 16:34:16.964751 | orchestrator | Friday 19 September 2025 16:34:14 +0000 (0:00:02.015) 0:00:09.281 ******
2025-09-19 16:34:16.964761 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:34:16.964773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:34:16.964785 | orchestrator |
2025-09-19 16:34:16.964796 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
2025-09-19 16:34:16.964807 | orchestrator | Friday 19 September 2025 16:34:14 +0000 (0:00:00.245) 0:00:09.526 ******
2025-09-19 16:34:16.964817 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:34:16.964828 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:34:16.964839 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:34:16.964849 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:34:16.964860 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:34:16.964870 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:34:16.964880 | orchestrator |
2025-09-19 16:34:16.964891 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ******
2025-09-19 16:34:16.964902 | orchestrator | Friday 19 September 2025 16:34:15 +0000 (0:00:01.055) 0:00:10.582 ******
2025-09-19 16:34:16.964913 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:34:16.964923 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:34:16.964934 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:34:16.964944 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:34:16.964955 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:34:16.964965 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:34:16.964976 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:34:16.964986 | orchestrator |
2025-09-19 16:34:16.964997 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
2025-09-19 16:34:16.965008 | orchestrator | Friday 19 September 2025 16:34:16 +0000 (0:00:00.561) 0:00:11.144 ******
2025-09-19 16:34:16.965018 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:34:16.965029 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:34:16.965039 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:34:16.965049 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:34:16.965060 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:34:16.965070 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:34:16.965081 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:16.965091 | orchestrator |
2025-09-19 16:34:16.965102 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-09-19 16:34:16.965113 | orchestrator | Friday 19 September 2025 16:34:16 +0000 (0:00:00.416) 0:00:11.560 ******
2025-09-19 16:34:16.965124 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:34:16.965135 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:34:16.965151 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:34:29.538589 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:34:29.538705 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:34:29.538720 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:34:29.538732 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:34:29.538743 | orchestrator |
2025-09-19 16:34:29.538755 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-09-19 16:34:29.538768 | orchestrator | Friday 19 September 2025 16:34:17 +0000 (0:00:00.215) 0:00:11.776 ******
2025-09-19 16:34:29.538780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:34:29.538809 | orchestrator |
2025-09-19 16:34:29.538820 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-09-19 16:34:29.538832 | orchestrator | Friday 19 September 2025 16:34:17 +0000 (0:00:00.290) 0:00:12.066 ******
2025-09-19 16:34:29.538863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:34:29.538875 | orchestrator |
2025-09-19 16:34:29.538886 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-09-19 16:34:29.538897 | orchestrator | Friday 19 September 2025 16:34:17 +0000 (0:00:00.270) 0:00:12.337 ******
2025-09-19 16:34:29.538908 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:29.538920 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:34:29.538930 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:34:29.538941 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:34:29.538951 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:34:29.538961 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:34:29.538972 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:34:29.538982 | orchestrator |
2025-09-19 16:34:29.538993 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-09-19 16:34:29.539004 | orchestrator | Friday 19 September 2025 16:34:19 +0000 (0:00:01.459) 0:00:13.797 ******
2025-09-19 16:34:29.539014 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:34:29.539025 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:34:29.539035 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:34:29.539046 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:34:29.539058 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:34:29.539071 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:34:29.539082 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:34:29.539094 | orchestrator |
2025-09-19 16:34:29.539107 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-09-19 16:34:29.539119 | orchestrator | Friday 19 September 2025 16:34:19 +0000 (0:00:00.210) 0:00:14.007 ******
2025-09-19 16:34:29.539132 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:29.539143 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:34:29.539155 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:34:29.539167 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:34:29.539179 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:34:29.539191 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:34:29.539203 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:34:29.539214 | orchestrator |
2025-09-19 16:34:29.539227 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-09-19 16:34:29.539238 | orchestrator | Friday 19 September 2025 16:34:19 +0000 (0:00:00.605) 0:00:14.613 ******
2025-09-19 16:34:29.539251 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:34:29.539263 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:34:29.539275 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:34:29.539288 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:34:29.539299 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:34:29.539311 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:34:29.539323 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:34:29.539335 | orchestrator |
2025-09-19 16:34:29.539347 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-09-19 16:34:29.539360 | orchestrator | Friday 19 September 2025 16:34:20 +0000 (0:00:00.213) 0:00:14.827 ******
2025-09-19 16:34:29.539372 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:29.539384 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:34:29.539396 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:34:29.539408 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:34:29.539418 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:34:29.539429 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:34:29.539440 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:34:29.539450 | orchestrator |
2025-09-19 16:34:29.539461 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-09-19 16:34:29.539472 | orchestrator | Friday 19 September 2025 16:34:20 +0000 (0:00:00.654) 0:00:15.482 ******
2025-09-19 16:34:29.539490 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:29.539500 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:34:29.539511 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:34:29.539548 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:34:29.539559 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:34:29.539569 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:34:29.539580 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:34:29.539590 | orchestrator |
2025-09-19 16:34:29.539601 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-09-19 16:34:29.539612 | orchestrator | Friday 19 September 2025 16:34:22 +0000 (0:00:01.308) 0:00:16.790 ******
2025-09-19 16:34:29.539622 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:29.539633 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:34:29.539644 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:34:29.539654 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:34:29.539666 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:34:29.539676 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:34:29.539687 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:34:29.539697 | orchestrator |
2025-09-19 16:34:29.539708 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-09-19 16:34:29.539719 | orchestrator | Friday 19 September 2025 16:34:23 +0000 (0:00:01.213) 0:00:18.003 ******
2025-09-19 16:34:29.539790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:34:29.539804 | orchestrator |
2025-09-19 16:34:29.539814 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-09-19 16:34:29.539825 | orchestrator | Friday 19 September 2025 16:34:23 +0000 (0:00:00.392) 0:00:18.396 ******
2025-09-19 16:34:29.539836 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:34:29.539847 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:34:29.539857 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:34:29.539868 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:34:29.539878 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:34:29.539889 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:34:29.539900 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:34:29.539910 | orchestrator |
2025-09-19 16:34:29.539921 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-19 16:34:29.539932 | orchestrator | Friday 19 September 2025 16:34:24 +0000 (0:00:01.292) 0:00:19.688 ******
2025-09-19 16:34:29.539942 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:29.539953 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:34:29.539963 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:34:29.539974 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:34:29.539985 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:34:29.539995 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:34:29.540006 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:34:29.540016 | orchestrator |
2025-09-19 16:34:29.540027 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-19 16:34:29.540038 | orchestrator | Friday 19 September 2025 16:34:25 +0000 (0:00:00.213) 0:00:19.901 ******
2025-09-19 16:34:29.540049 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:29.540059 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:34:29.540070 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:34:29.540080 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:34:29.540091 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:34:29.540101 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:34:29.540112 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:34:29.540122 | orchestrator |
2025-09-19 16:34:29.540133 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-19 16:34:29.540143 | orchestrator | Friday 19 September 2025 16:34:25 +0000 (0:00:00.246) 0:00:20.148 ******
2025-09-19 16:34:29.540154 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:29.540165 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:34:29.540183 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:34:29.540193 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:34:29.540203 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:34:29.540214 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:34:29.540224 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:34:29.540235 | orchestrator |
2025-09-19 16:34:29.540246 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-19 16:34:29.540256 | orchestrator | Friday 19 September 2025 16:34:25 +0000 (0:00:00.263) 0:00:20.411 ******
2025-09-19 16:34:29.540273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:34:29.540286 | orchestrator |
2025-09-19 16:34:29.540297 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-19 16:34:29.540307 | orchestrator | Friday 19 September 2025 16:34:25 +0000 (0:00:00.305) 0:00:20.717 ******
2025-09-19 16:34:29.540318 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:29.540329 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:34:29.540339 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:34:29.540350 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:34:29.540363 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:34:29.540382 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:34:29.540400 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:34:29.540430 | orchestrator |
2025-09-19 16:34:29.540448 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-19 16:34:29.540465 | orchestrator | Friday 19 September 2025 16:34:26 +0000 (0:00:00.515) 0:00:21.232 ******
2025-09-19 16:34:29.540482 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:34:29.540499 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:34:29.540542 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:34:29.540560 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:34:29.540578 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:34:29.540595 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:34:29.540611 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:34:29.540628 | orchestrator |
2025-09-19 16:34:29.540644 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-19 16:34:29.540661 | orchestrator | Friday 19 September 2025 16:34:26 +0000 (0:00:00.238) 0:00:21.471 ******
2025-09-19 16:34:29.540678 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:29.540695 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:34:29.540713 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:34:29.540730 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:34:29.540748 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:34:29.540766 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:34:29.540784 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:34:29.540802 | orchestrator |
2025-09-19 16:34:29.540820 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-19 16:34:29.540838 | orchestrator | Friday 19 September 2025 16:34:27 +0000 (0:00:01.100) 0:00:22.571 ******
2025-09-19 16:34:29.540850 | orchestrator | ok: [testbed-manager]
2025-09-19 16:34:29.540860 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:34:29.540871 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:34:29.540882 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:34:29.540892 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:34:29.540902 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:34:29.540913 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:34:29.540923 | orchestrator |
2025-09-19 16:34:29.540934 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-19 16:34:29.540944 | orchestrator | Friday 19 September 2025 16:34:28 +0000 (0:00:00.578) 0:00:23.150 ****** 2025-09-19 16:34:29.540955 | orchestrator | ok: [testbed-manager] 2025-09-19 16:34:29.540966 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:34:29.540976 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:34:29.540987 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:34:29.541021 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:35:12.983699 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:35:12.983812 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:35:12.983827 | orchestrator | 2025-09-19 16:35:12.983839 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-19 16:35:12.983851 | orchestrator | Friday 19 September 2025 16:34:29 +0000 (0:00:01.113) 0:00:24.263 ****** 2025-09-19 16:35:12.983861 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:35:12.983871 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:35:12.983881 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:35:12.983890 | orchestrator | changed: [testbed-manager] 2025-09-19 16:35:12.983900 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:35:12.983910 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:35:12.983920 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:35:12.983929 | orchestrator | 2025-09-19 16:35:12.983939 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-19 16:35:12.983949 | orchestrator | Friday 19 September 2025 16:34:47 +0000 (0:00:18.167) 0:00:42.431 ****** 2025-09-19 16:35:12.983959 | orchestrator | ok: [testbed-manager] 2025-09-19 16:35:12.983969 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:35:12.983978 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:35:12.983988 | orchestrator 
| ok: [testbed-node-2] 2025-09-19 16:35:12.983997 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:35:12.984006 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:35:12.984016 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:35:12.984026 | orchestrator | 2025-09-19 16:35:12.984035 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-19 16:35:12.984045 | orchestrator | Friday 19 September 2025 16:34:47 +0000 (0:00:00.251) 0:00:42.683 ****** 2025-09-19 16:35:12.984055 | orchestrator | ok: [testbed-manager] 2025-09-19 16:35:12.984064 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:35:12.984074 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:35:12.984084 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:35:12.984093 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:35:12.984103 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:35:12.984112 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:35:12.984121 | orchestrator | 2025-09-19 16:35:12.984131 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-19 16:35:12.984141 | orchestrator | Friday 19 September 2025 16:34:48 +0000 (0:00:00.228) 0:00:42.912 ****** 2025-09-19 16:35:12.984153 | orchestrator | ok: [testbed-manager] 2025-09-19 16:35:12.984164 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:35:12.984175 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:35:12.984186 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:35:12.984197 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:35:12.984208 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:35:12.984218 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:35:12.984229 | orchestrator | 2025-09-19 16:35:12.984240 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-19 16:35:12.984250 | orchestrator | Friday 19 September 2025 16:34:48 +0000 (0:00:00.226) 0:00:43.138 ****** 2025-09-19 
16:35:12.984280 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:35:12.984294 | orchestrator | 2025-09-19 16:35:12.984306 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-19 16:35:12.984317 | orchestrator | Friday 19 September 2025 16:34:48 +0000 (0:00:00.330) 0:00:43.469 ****** 2025-09-19 16:35:12.984328 | orchestrator | ok: [testbed-manager] 2025-09-19 16:35:12.984338 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:35:12.984349 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:35:12.984360 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:35:12.984370 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:35:12.984381 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:35:12.984412 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:35:12.984424 | orchestrator | 2025-09-19 16:35:12.984435 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-19 16:35:12.984446 | orchestrator | Friday 19 September 2025 16:34:50 +0000 (0:00:01.813) 0:00:45.283 ****** 2025-09-19 16:35:12.984457 | orchestrator | changed: [testbed-manager] 2025-09-19 16:35:12.984467 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:35:12.984478 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:35:12.984489 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:35:12.984499 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:35:12.984509 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:35:12.984518 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:35:12.984528 | orchestrator | 2025-09-19 16:35:12.984561 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-19 16:35:12.984574 | 
orchestrator | Friday 19 September 2025 16:34:51 +0000 (0:00:01.164) 0:00:46.448 ****** 2025-09-19 16:35:12.984592 | orchestrator | ok: [testbed-manager] 2025-09-19 16:35:12.984606 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:35:12.984622 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:35:12.984639 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:35:12.984655 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:35:12.984671 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:35:12.984682 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:35:12.984691 | orchestrator | 2025-09-19 16:35:12.984701 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-19 16:35:12.984711 | orchestrator | Friday 19 September 2025 16:34:52 +0000 (0:00:00.956) 0:00:47.404 ****** 2025-09-19 16:35:12.984721 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:35:12.984733 | orchestrator | 2025-09-19 16:35:12.984743 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-19 16:35:12.984753 | orchestrator | Friday 19 September 2025 16:34:53 +0000 (0:00:00.353) 0:00:47.758 ****** 2025-09-19 16:35:12.984763 | orchestrator | changed: [testbed-manager] 2025-09-19 16:35:12.984772 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:35:12.984782 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:35:12.984791 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:35:12.984801 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:35:12.984810 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:35:12.984819 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:35:12.984829 | orchestrator | 2025-09-19 16:35:12.984856 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2025-09-19 16:35:12.984866 | orchestrator | Friday 19 September 2025 16:34:54 +0000 (0:00:01.153) 0:00:48.911 ****** 2025-09-19 16:35:12.984876 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:35:12.984885 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:35:12.984894 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:35:12.984904 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:35:12.984913 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:35:12.984922 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:35:12.984931 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:35:12.984941 | orchestrator | 2025-09-19 16:35:12.984950 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-19 16:35:12.984960 | orchestrator | Friday 19 September 2025 16:34:54 +0000 (0:00:00.351) 0:00:49.263 ****** 2025-09-19 16:35:12.984969 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:35:12.984978 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:35:12.984988 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:35:12.984997 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:35:12.985006 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:35:12.985015 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:35:12.985024 | orchestrator | changed: [testbed-manager] 2025-09-19 16:35:12.985043 | orchestrator | 2025-09-19 16:35:12.985053 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-19 16:35:12.985062 | orchestrator | Friday 19 September 2025 16:35:06 +0000 (0:00:12.473) 0:01:01.736 ****** 2025-09-19 16:35:12.985072 | orchestrator | ok: [testbed-manager] 2025-09-19 16:35:12.985081 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:35:12.985090 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:35:12.985100 | orchestrator | ok: [testbed-node-3] 2025-09-19 
16:35:12.985109 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:35:12.985118 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:35:12.985127 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:35:12.985136 | orchestrator | 2025-09-19 16:35:12.985146 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-09-19 16:35:12.985155 | orchestrator | Friday 19 September 2025 16:35:08 +0000 (0:00:01.684) 0:01:03.421 ****** 2025-09-19 16:35:12.985165 | orchestrator | ok: [testbed-manager] 2025-09-19 16:35:12.985174 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:35:12.985183 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:35:12.985193 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:35:12.985202 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:35:12.985211 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:35:12.985221 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:35:12.985230 | orchestrator | 2025-09-19 16:35:12.985239 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-09-19 16:35:12.985249 | orchestrator | Friday 19 September 2025 16:35:09 +0000 (0:00:00.876) 0:01:04.298 ****** 2025-09-19 16:35:12.985258 | orchestrator | ok: [testbed-manager] 2025-09-19 16:35:12.985268 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:35:12.985277 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:35:12.985286 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:35:12.985295 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:35:12.985305 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:35:12.985315 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:35:12.985324 | orchestrator | 2025-09-19 16:35:12.985334 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-09-19 16:35:12.985343 | orchestrator | Friday 19 September 2025 16:35:09 +0000 (0:00:00.222) 0:01:04.521 ****** 2025-09-19 16:35:12.985353 | 
orchestrator | ok: [testbed-manager] 2025-09-19 16:35:12.985362 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:35:12.985371 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:35:12.985381 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:35:12.985390 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:35:12.985400 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:35:12.985409 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:35:12.985418 | orchestrator | 2025-09-19 16:35:12.985428 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-09-19 16:35:12.985437 | orchestrator | Friday 19 September 2025 16:35:10 +0000 (0:00:00.231) 0:01:04.753 ****** 2025-09-19 16:35:12.985447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:35:12.985457 | orchestrator | 2025-09-19 16:35:12.985467 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-09-19 16:35:12.985476 | orchestrator | Friday 19 September 2025 16:35:10 +0000 (0:00:00.277) 0:01:05.030 ****** 2025-09-19 16:35:12.985486 | orchestrator | ok: [testbed-manager] 2025-09-19 16:35:12.985495 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:35:12.985505 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:35:12.985514 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:35:12.985523 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:35:12.985533 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:35:12.985559 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:35:12.985569 | orchestrator | 2025-09-19 16:35:12.985579 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-09-19 16:35:12.985595 | orchestrator | Friday 19 September 2025 16:35:12 +0000 
(0:00:01.790) 0:01:06.821 ******
2025-09-19 16:35:12.985604 | orchestrator | changed: [testbed-manager]
2025-09-19 16:35:12.985614 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:35:12.985623 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:35:12.985633 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:35:12.985651 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:35:12.985661 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:35:12.985671 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:35:12.985680 | orchestrator |
2025-09-19 16:35:12.985690 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-09-19 16:35:12.985699 | orchestrator | Friday 19 September 2025 16:35:12 +0000 (0:00:00.687) 0:01:07.509 ******
2025-09-19 16:35:12.985709 | orchestrator | ok: [testbed-manager]
2025-09-19 16:35:12.985718 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:35:12.985728 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:35:12.985737 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:35:12.985746 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:35:12.985756 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:35:12.985765 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:35:12.985774 | orchestrator |
2025-09-19 16:35:12.985790 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-09-19 16:37:31.357906 | orchestrator | Friday 19 September 2025 16:35:12 +0000 (0:00:00.205) 0:01:07.714 ******
2025-09-19 16:37:31.358085 | orchestrator | ok: [testbed-manager]
2025-09-19 16:37:31.358105 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:37:31.358116 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:37:31.358127 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:37:31.358138 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:37:31.358149 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:37:31.358160 | orchestrator | ok:
[testbed-node-4]
2025-09-19 16:37:31.358171 | orchestrator |
2025-09-19 16:37:31.358183 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-09-19 16:37:31.358194 | orchestrator | Friday 19 September 2025 16:35:14 +0000 (0:00:01.452) 0:01:09.166 ******
2025-09-19 16:37:31.358206 | orchestrator | changed: [testbed-manager]
2025-09-19 16:37:31.358217 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:37:31.358228 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:37:31.358239 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:37:31.358249 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:37:31.358260 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:37:31.358271 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:37:31.358281 | orchestrator |
2025-09-19 16:37:31.358293 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-09-19 16:37:31.358304 | orchestrator | Friday 19 September 2025 16:35:16 +0000 (0:00:02.073) 0:01:11.239 ******
2025-09-19 16:37:31.358315 | orchestrator | ok: [testbed-manager]
2025-09-19 16:37:31.358326 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:37:31.358336 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:37:31.358347 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:37:31.358358 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:37:31.358368 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:37:31.358379 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:37:31.358390 | orchestrator |
2025-09-19 16:37:31.358401 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-09-19 16:37:31.358411 | orchestrator | Friday 19 September 2025 16:35:19 +0000 (0:00:03.144) 0:01:14.384 ******
2025-09-19 16:37:31.358422 | orchestrator | ok: [testbed-manager]
2025-09-19 16:37:31.358433 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:37:31.358446 | orchestrator |
ok: [testbed-node-4]
2025-09-19 16:37:31.358465 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:37:31.358485 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:37:31.358505 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:37:31.358530 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:37:31.358557 | orchestrator |
2025-09-19 16:37:31.358578 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-09-19 16:37:31.358674 | orchestrator | Friday 19 September 2025 16:35:55 +0000 (0:00:35.420) 0:01:49.805 ******
2025-09-19 16:37:31.358700 | orchestrator | changed: [testbed-manager]
2025-09-19 16:37:31.358721 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:37:31.358743 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:37:31.358763 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:37:31.358784 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:37:31.358805 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:37:31.358828 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:37:31.358855 | orchestrator |
2025-09-19 16:37:31.358895 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-09-19 16:37:31.358917 | orchestrator | Friday 19 September 2025 16:37:12 +0000 (0:01:17.297) 0:03:07.102 ******
2025-09-19 16:37:31.358938 | orchestrator | ok: [testbed-manager]
2025-09-19 16:37:31.358951 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:37:31.358962 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:37:31.358972 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:37:31.358983 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:37:31.358993 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:37:31.359003 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:37:31.359014 | orchestrator |
2025-09-19 16:37:31.359025 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-09-19 16:37:31.359036
| orchestrator | Friday 19 September 2025 16:37:14 +0000 (0:00:01.840) 0:03:08.943 ******
2025-09-19 16:37:31.359047 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:37:31.359057 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:37:31.359067 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:37:31.359078 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:37:31.359088 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:37:31.359098 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:37:31.359109 | orchestrator | changed: [testbed-manager]
2025-09-19 16:37:31.359119 | orchestrator |
2025-09-19 16:37:31.359130 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-09-19 16:37:31.359141 | orchestrator | Friday 19 September 2025 16:37:25 +0000 (0:00:11.366) 0:03:20.309 ******
2025-09-19 16:37:31.359161 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-09-19 16:37:31.359178 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog',
'value': 8192}]})
2025-09-19 16:37:31.359218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-09-19 16:37:31.359238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-09-19 16:37:31.359261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-09-19 16:37:31.359273 | orchestrator |
2025-09-19 16:37:31.359283 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-09-19 16:37:31.359294 | orchestrator | Friday 19 September 2025 16:37:25 +0000 (0:00:00.356) 0:03:20.666 ******
2025-09-19 16:37:31.359305 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 16:37:31.359316 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:37:31.359327 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 16:37:31.359337 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 16:37:31.359348 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:37:31.359358 | orchestrator | skipping: [testbed-node-4]
2025-09-19
16:37:31.359369 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 16:37:31.359380 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:37:31.359390 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 16:37:31.359401 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 16:37:31.359411 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-19 16:37:31.359421 | orchestrator |
2025-09-19 16:37:31.359432 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-09-19 16:37:31.359448 | orchestrator | Friday 19 September 2025 16:37:26 +0000 (0:00:00.627) 0:03:21.293 ******
2025-09-19 16:37:31.359458 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 16:37:31.359470 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 16:37:31.359481 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 16:37:31.359491 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 16:37:31.359502 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 16:37:31.359512 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 16:37:31.359523 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 16:37:31.359533 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 16:37:31.359544 | orchestrator | skipping: [testbed-manager] =>
(item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 16:37:31.359554 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 16:37:31.359565 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:37:31.359576 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 16:37:31.359586 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 16:37:31.359624 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 16:37:31.359645 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 16:37:31.359664 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 16:37:31.359683 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 16:37:31.359708 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 16:37:31.359719 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 16:37:31.359729 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 16:37:31.359740 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 16:37:31.359758 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 16:37:33.442536 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 16:37:33.442692 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19
16:37:33.442709 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 16:37:33.442721 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 16:37:33.442733 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:37:33.442745 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 16:37:33.442756 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 16:37:33.442767 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 16:37:33.442777 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 16:37:33.442788 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 16:37:33.442799 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:37:33.442809 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 16:37:33.442820 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 16:37:33.442831 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 16:37:33.442842 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 16:37:33.442852 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 16:37:33.442863 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 16:37:33.442873 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19
16:37:33.442884 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 16:37:33.442894 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 16:37:33.442906 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 16:37:33.442917 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:37:33.442928 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 16:37:33.442938 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 16:37:33.442949 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-19 16:37:33.442959 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 16:37:33.442970 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 16:37:33.442981 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-19 16:37:33.442991 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 16:37:33.443024 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 16:37:33.443035 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-19 16:37:33.443046 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 16:37:33.443056 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-19 16:37:33.443067 | orchestrator | changed: [testbed-node-0] => (item={'name':
'net.core.wmem_max', 'value': 16777216})
2025-09-19 16:37:33.443079 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 16:37:33.443091 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 16:37:33.443103 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 16:37:33.443115 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 16:37:33.443126 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 16:37:33.443138 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 16:37:33.443150 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-19 16:37:33.443162 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 16:37:33.443174 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 16:37:33.443202 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-19 16:37:33.443215 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 16:37:33.443245 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 16:37:33.443258 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-19 16:37:33.443270 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 16:37:33.443282 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19
16:37:33.443294 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-19 16:37:33.443306 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-19 16:37:33.443318 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-19 16:37:33.443329 | orchestrator |
2025-09-19 16:37:33.443343 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-09-19 16:37:33.443355 | orchestrator | Friday 19 September 2025 16:37:31 +0000 (0:00:04.790) 0:03:26.083 ******
2025-09-19 16:37:33.443367 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 16:37:33.443379 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 16:37:33.443391 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 16:37:33.443403 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 16:37:33.443415 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 16:37:33.443428 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 16:37:33.443439 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-19 16:37:33.443450 | orchestrator |
2025-09-19 16:37:33.443461 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-09-19 16:37:33.443480 | orchestrator | Friday 19 September 2025 16:37:31 +0000 (0:00:00.605) 0:03:26.689 ******
2025-09-19 16:37:33.443491 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 16:37:33.443502 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:37:33.443518 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 16:37:33.443529 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 16:37:33.443540 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:37:33.443550 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 16:37:33.443561 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:37:33.443572 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:37:33.443583 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 16:37:33.443594 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 16:37:33.443625 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-19 16:37:33.443645 | orchestrator |
2025-09-19 16:37:33.443664 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-09-19 16:37:33.443689 | orchestrator | Friday 19 September 2025 16:37:32 +0000 (0:00:00.564) 0:03:27.253 ******
2025-09-19 16:37:33.443718 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 16:37:33.443736 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:37:33.443754 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 16:37:33.443773 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:37:33.443794 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 16:37:33.443815 | orchestrator | skipping: [testbed-node-2] =>
(item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 16:37:33.443835 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:37:33.443854 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:37:33.443865 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 16:37:33.443875 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 16:37:33.443886 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-19 16:37:33.443897 | orchestrator |
2025-09-19 16:37:33.443907 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-09-19 16:37:33.443918 | orchestrator | Friday 19 September 2025 16:37:33 +0000 (0:00:00.636) 0:03:27.890 ******
2025-09-19 16:37:33.443929 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:37:33.443939 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:37:33.443950 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:37:33.443961 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:37:33.443971 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:37:33.443991 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:37:45.126102 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:37:45.126208 | orchestrator |
2025-09-19 16:37:45.126223 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-09-19 16:37:45.126233 | orchestrator | Friday 19 September 2025 16:37:33 +0000 (0:00:00.288) 0:03:28.179 ******
2025-09-19 16:37:45.126242 | orchestrator | ok: [testbed-manager]
2025-09-19 16:37:45.126254 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:37:45.126263 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:37:45.126272 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:37:45.126300 | orchestrator | ok:
[testbed-node-0]
2025-09-19 16:37:45.126309 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:37:45.126318 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:37:45.126326 | orchestrator |
2025-09-19 16:37:45.126335 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-09-19 16:37:45.126344 | orchestrator | Friday 19 September 2025 16:37:39 +0000 (0:00:05.653) 0:03:33.832 ******
2025-09-19 16:37:45.126352 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-09-19 16:37:45.126361 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-09-19 16:37:45.126370 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:37:45.126378 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:37:45.126387 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-09-19 16:37:45.126395 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-09-19 16:37:45.126404 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:37:45.126412 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:37:45.126421 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-09-19 16:37:45.126429 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:37:45.126437 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-09-19 16:37:45.126450 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:37:45.126459 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-09-19 16:37:45.126468 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:37:45.126476 | orchestrator |
2025-09-19 16:37:45.126485 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-09-19 16:37:45.126494 | orchestrator | Friday 19 September 2025 16:37:39 +0000 (0:00:00.316) 0:03:34.148 ******
2025-09-19 16:37:45.126503 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-09-19 16:37:45.126512 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-09-19
16:37:45.126521 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-09-19 16:37:45.126529 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-09-19 16:37:45.126538 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-09-19 16:37:45.126546 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-09-19 16:37:45.126555 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-09-19 16:37:45.126570 | orchestrator |
2025-09-19 16:37:45.126585 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-09-19 16:37:45.126728 | orchestrator | Friday 19 September 2025 16:37:40 +0000 (0:00:01.008) 0:03:35.157 ******
2025-09-19 16:37:45.126763 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:37:45.126776 | orchestrator |
2025-09-19 16:37:45.126786 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-09-19 16:37:45.126796 | orchestrator | Friday 19 September 2025 16:37:40 +0000 (0:00:00.474) 0:03:35.631 ******
2025-09-19 16:37:45.126806 | orchestrator | ok: [testbed-manager]
2025-09-19 16:37:45.126816 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:37:45.126826 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:37:45.126836 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:37:45.126845 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:37:45.126855 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:37:45.126865 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:37:45.126875 | orchestrator |
2025-09-19 16:37:45.126885 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-09-19 16:37:45.126894 | orchestrator | Friday 19 September 2025 16:37:42 +0000 (0:00:01.236) 0:03:36.867 ******
2025-09-19
16:37:45.126904 | orchestrator | ok: [testbed-manager]
2025-09-19 16:37:45.126913 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:37:45.126923 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:37:45.126933 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:37:45.126942 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:37:45.126952 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:37:45.126972 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:37:45.126980 | orchestrator |
2025-09-19 16:37:45.126989 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-09-19 16:37:45.126998 | orchestrator | Friday 19 September 2025 16:37:42 +0000 (0:00:00.609) 0:03:37.477 ******
2025-09-19 16:37:45.127006 | orchestrator | changed: [testbed-manager]
2025-09-19 16:37:45.127015 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:37:45.127023 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:37:45.127032 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:37:45.127040 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:37:45.127048 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:37:45.127057 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:37:45.127065 | orchestrator |
2025-09-19 16:37:45.127074 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-09-19 16:37:45.127082 | orchestrator | Friday 19 September 2025 16:37:43 +0000 (0:00:00.596) 0:03:38.074 ******
2025-09-19 16:37:45.127091 | orchestrator | ok: [testbed-manager]
2025-09-19 16:37:45.127099 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:37:45.127108 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:37:45.127116 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:37:45.127124 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:37:45.127133 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:37:45.127141 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:37:45.127149 |
orchestrator |
2025-09-19 16:37:45.127158 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-09-19 16:37:45.127166 | orchestrator | Friday 19 September 2025 16:37:44 +0000 (0:00:00.694) 0:03:38.768 ******
2025-09-19 16:37:45.127197 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758298303.5572834, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 16:37:45.127210 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758298337.6859145, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-19 16:37:45.127220 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758298325.6174614, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth':
False, 'isuid': False, 'isgid': False}) 2025-09-19 16:37:45.127234 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758298327.1142833, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 16:37:45.127243 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758298338.713496, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 16:37:45.127259 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758298327.7860594, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 16:37:45.127268 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758298336.7663367, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 16:37:45.127292 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 16:38:02.606227 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 16:38:02.606344 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 16:38:02.606359 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 16:38:02.606396 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 16:38:02.606425 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 
16:38:02.606437 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 16:38:02.606448 | orchestrator | 2025-09-19 16:38:02.606461 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-09-19 16:38:02.606474 | orchestrator | Friday 19 September 2025 16:37:45 +0000 (0:00:01.080) 0:03:39.848 ****** 2025-09-19 16:38:02.606485 | orchestrator | changed: [testbed-manager] 2025-09-19 16:38:02.606496 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:38:02.606506 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:38:02.606517 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:38:02.606527 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:38:02.606538 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:38:02.606548 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:38:02.606559 | orchestrator | 2025-09-19 16:38:02.606570 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-09-19 16:38:02.606580 | orchestrator | Friday 19 September 2025 16:37:46 +0000 (0:00:01.137) 0:03:40.986 ****** 2025-09-19 16:38:02.606591 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:38:02.606602 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:38:02.606658 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:38:02.606669 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:38:02.606695 | orchestrator | changed: [testbed-node-5] 
2025-09-19 16:38:02.606706 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:38:02.606717 | orchestrator | changed: [testbed-manager] 2025-09-19 16:38:02.606727 | orchestrator | 2025-09-19 16:38:02.606738 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-09-19 16:38:02.606748 | orchestrator | Friday 19 September 2025 16:37:48 +0000 (0:00:01.861) 0:03:42.848 ****** 2025-09-19 16:38:02.606759 | orchestrator | changed: [testbed-manager] 2025-09-19 16:38:02.606772 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:38:02.606784 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:38:02.606797 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:38:02.606809 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:38:02.606821 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:38:02.606833 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:38:02.606845 | orchestrator | 2025-09-19 16:38:02.606857 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-09-19 16:38:02.606870 | orchestrator | Friday 19 September 2025 16:37:49 +0000 (0:00:01.140) 0:03:43.989 ****** 2025-09-19 16:38:02.606894 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:38:02.606906 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:38:02.606919 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:38:02.606931 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:38:02.606944 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:38:02.606954 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:38:02.606965 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:38:02.606975 | orchestrator | 2025-09-19 16:38:02.606986 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-09-19 16:38:02.606997 | orchestrator | Friday 19 September 2025 16:37:49 +0000 (0:00:00.301) 0:03:44.290 ****** 2025-09-19 
16:38:02.607008 | orchestrator | ok: [testbed-manager] 2025-09-19 16:38:02.607020 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:38:02.607031 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:38:02.607041 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:38:02.607051 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:38:02.607062 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:38:02.607072 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:38:02.607083 | orchestrator | 2025-09-19 16:38:02.607093 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-09-19 16:38:02.607104 | orchestrator | Friday 19 September 2025 16:37:50 +0000 (0:00:00.742) 0:03:45.033 ****** 2025-09-19 16:38:02.607121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:38:02.607134 | orchestrator | 2025-09-19 16:38:02.607145 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-09-19 16:38:02.607155 | orchestrator | Friday 19 September 2025 16:37:50 +0000 (0:00:00.386) 0:03:45.419 ****** 2025-09-19 16:38:02.607166 | orchestrator | ok: [testbed-manager] 2025-09-19 16:38:02.607176 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:38:02.607187 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:38:02.607197 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:38:02.607208 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:38:02.607218 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:38:02.607228 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:38:02.607239 | orchestrator | 2025-09-19 16:38:02.607250 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-09-19 16:38:02.607260 | orchestrator | 
Friday 19 September 2025 16:37:59 +0000 (0:00:08.490) 0:03:53.909 ****** 2025-09-19 16:38:02.607271 | orchestrator | ok: [testbed-manager] 2025-09-19 16:38:02.607281 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:38:02.607292 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:38:02.607302 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:38:02.607313 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:38:02.607323 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:38:02.607334 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:38:02.607345 | orchestrator | 2025-09-19 16:38:02.607355 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-09-19 16:38:02.607366 | orchestrator | Friday 19 September 2025 16:38:00 +0000 (0:00:01.281) 0:03:55.191 ****** 2025-09-19 16:38:02.607377 | orchestrator | ok: [testbed-manager] 2025-09-19 16:38:02.607387 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:38:02.607397 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:38:02.607408 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:38:02.607418 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:38:02.607428 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:38:02.607439 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:38:02.607449 | orchestrator | 2025-09-19 16:38:02.607460 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-09-19 16:38:02.607471 | orchestrator | Friday 19 September 2025 16:38:01 +0000 (0:00:01.189) 0:03:56.380 ****** 2025-09-19 16:38:02.607481 | orchestrator | ok: [testbed-manager] 2025-09-19 16:38:02.607499 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:38:02.607509 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:38:02.607520 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:38:02.607530 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:38:02.607540 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:38:02.607551 | orchestrator | ok: 
[testbed-node-5] 2025-09-19 16:38:02.607561 | orchestrator | 2025-09-19 16:38:02.607572 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-09-19 16:38:02.607583 | orchestrator | Friday 19 September 2025 16:38:01 +0000 (0:00:00.266) 0:03:56.647 ****** 2025-09-19 16:38:02.607593 | orchestrator | ok: [testbed-manager] 2025-09-19 16:38:02.607604 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:38:02.607632 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:38:02.607643 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:38:02.607659 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:38:02.607676 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:38:02.607691 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:38:02.607701 | orchestrator | 2025-09-19 16:38:02.607712 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-09-19 16:38:02.607722 | orchestrator | Friday 19 September 2025 16:38:02 +0000 (0:00:00.407) 0:03:57.055 ****** 2025-09-19 16:38:02.607733 | orchestrator | ok: [testbed-manager] 2025-09-19 16:38:02.607743 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:38:02.607754 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:38:02.607764 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:38:02.607774 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:38:02.607792 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:39:13.927381 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:39:13.927489 | orchestrator | 2025-09-19 16:39:13.927503 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-09-19 16:39:13.927515 | orchestrator | Friday 19 September 2025 16:38:02 +0000 (0:00:00.286) 0:03:57.341 ****** 2025-09-19 16:39:13.927526 | orchestrator | ok: [testbed-manager] 2025-09-19 16:39:13.927536 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:39:13.927546 | orchestrator | ok: 
[testbed-node-5] 2025-09-19 16:39:13.927556 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:39:13.927565 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:39:13.927575 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:39:13.927584 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:39:13.927593 | orchestrator | 2025-09-19 16:39:13.927603 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-09-19 16:39:13.927614 | orchestrator | Friday 19 September 2025 16:38:08 +0000 (0:00:05.682) 0:04:03.024 ****** 2025-09-19 16:39:13.927625 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:39:13.927675 | orchestrator | 2025-09-19 16:39:13.927686 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-09-19 16:39:13.927695 | orchestrator | Friday 19 September 2025 16:38:08 +0000 (0:00:00.378) 0:04:03.402 ****** 2025-09-19 16:39:13.927706 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-09-19 16:39:13.927715 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-09-19 16:39:13.927726 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-09-19 16:39:13.927736 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:39:13.927745 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-09-19 16:39:13.927755 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:39:13.927765 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-09-19 16:39:13.927774 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-09-19 16:39:13.927784 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:39:13.927793 | orchestrator | skipping: [testbed-node-2] => 
(item=apt-daily-upgrade)  2025-09-19 16:39:13.927803 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-09-19 16:39:13.927833 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-09-19 16:39:13.927856 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-09-19 16:39:13.927866 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:39:13.927876 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-09-19 16:39:13.927885 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-09-19 16:39:13.927894 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:39:13.927904 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:39:13.927913 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-09-19 16:39:13.927922 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-09-19 16:39:13.927934 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:39:13.927945 | orchestrator | 2025-09-19 16:39:13.927956 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-09-19 16:39:13.927966 | orchestrator | Friday 19 September 2025 16:38:09 +0000 (0:00:00.360) 0:04:03.762 ****** 2025-09-19 16:39:13.927977 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:39:13.927988 | orchestrator | 2025-09-19 16:39:13.927999 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-09-19 16:39:13.928010 | orchestrator | Friday 19 September 2025 16:38:09 +0000 (0:00:00.404) 0:04:04.167 ****** 2025-09-19 16:39:13.928021 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-09-19 16:39:13.928031 | orchestrator | skipping: 
[testbed-node-0] => (item=ModemManager.service)  2025-09-19 16:39:13.928041 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:39:13.928052 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:39:13.928063 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-09-19 16:39:13.928073 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-09-19 16:39:13.928084 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:39:13.928094 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:39:13.928105 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-09-19 16:39:13.928115 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-09-19 16:39:13.928125 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:39:13.928136 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:39:13.928147 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-09-19 16:39:13.928157 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:39:13.928167 | orchestrator | 2025-09-19 16:39:13.928179 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-09-19 16:39:13.928189 | orchestrator | Friday 19 September 2025 16:38:09 +0000 (0:00:00.312) 0:04:04.480 ****** 2025-09-19 16:39:13.928200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:39:13.928211 | orchestrator | 2025-09-19 16:39:13.928221 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-09-19 16:39:13.928232 | orchestrator | Friday 19 September 2025 16:38:10 +0000 (0:00:00.417) 0:04:04.898 ****** 2025-09-19 16:39:13.928242 | orchestrator | changed: [testbed-manager] 2025-09-19 
16:39:13.928267 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:39:13.928278 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:39:13.928288 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:39:13.928297 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:39:13.928307 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:39:13.928316 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:39:13.928325 | orchestrator | 2025-09-19 16:39:13.928342 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-09-19 16:39:13.928351 | orchestrator | Friday 19 September 2025 16:38:45 +0000 (0:00:34.937) 0:04:39.835 ****** 2025-09-19 16:39:13.928361 | orchestrator | changed: [testbed-manager] 2025-09-19 16:39:13.928370 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:39:13.928380 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:39:13.928389 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:39:13.928398 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:39:13.928408 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:39:13.928417 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:39:13.928426 | orchestrator | 2025-09-19 16:39:13.928436 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-09-19 16:39:13.928446 | orchestrator | Friday 19 September 2025 16:38:53 +0000 (0:00:08.639) 0:04:48.475 ****** 2025-09-19 16:39:13.928455 | orchestrator | changed: [testbed-manager] 2025-09-19 16:39:13.928464 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:39:13.928474 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:39:13.928483 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:39:13.928493 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:39:13.928502 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:39:13.928511 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:39:13.928521 | 
orchestrator | 2025-09-19 16:39:13.928530 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-09-19 16:39:13.928540 | orchestrator | Friday 19 September 2025 16:39:01 +0000 (0:00:07.852) 0:04:56.328 ****** 2025-09-19 16:39:13.928550 | orchestrator | ok: [testbed-manager] 2025-09-19 16:39:13.928559 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:39:13.928568 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:39:13.928578 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:39:13.928587 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:39:13.928596 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:39:13.928606 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:39:13.928615 | orchestrator | 2025-09-19 16:39:13.928625 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-09-19 16:39:13.928650 | orchestrator | Friday 19 September 2025 16:39:03 +0000 (0:00:01.763) 0:04:58.091 ****** 2025-09-19 16:39:13.928660 | orchestrator | changed: [testbed-manager] 2025-09-19 16:39:13.928670 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:39:13.928684 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:39:13.928694 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:39:13.928703 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:39:13.928713 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:39:13.928722 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:39:13.928731 | orchestrator | 2025-09-19 16:39:13.928741 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-09-19 16:39:13.928750 | orchestrator | Friday 19 September 2025 16:39:09 +0000 (0:00:06.555) 0:05:04.646 ****** 2025-09-19 16:39:13.928760 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:39:13.928772 | orchestrator | 2025-09-19 16:39:13.928782 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-09-19 16:39:13.928791 | orchestrator | Friday 19 September 2025 16:39:10 +0000 (0:00:00.511) 0:05:05.158 ****** 2025-09-19 16:39:13.928800 | orchestrator | changed: [testbed-manager] 2025-09-19 16:39:13.928810 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:39:13.928819 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:39:13.928829 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:39:13.928838 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:39:13.928847 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:39:13.928857 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:39:13.928866 | orchestrator | 2025-09-19 16:39:13.928876 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-09-19 16:39:13.928892 | orchestrator | Friday 19 September 2025 16:39:11 +0000 (0:00:00.717) 0:05:05.876 ****** 2025-09-19 16:39:13.928902 | orchestrator | ok: [testbed-manager] 2025-09-19 16:39:13.928911 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:39:13.928921 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:39:13.928930 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:39:13.928940 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:39:13.928949 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:39:13.928958 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:39:13.928968 | orchestrator | 2025-09-19 16:39:13.928977 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-09-19 16:39:13.928986 | orchestrator | Friday 19 September 2025 16:39:12 +0000 (0:00:01.718) 0:05:07.594 ****** 2025-09-19 16:39:13.928996 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:39:13.929005 | orchestrator | changed: [testbed-node-1] 
2025-09-19 16:39:13.929015 | orchestrator | changed: [testbed-manager] 2025-09-19 16:39:13.929024 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:39:13.929034 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:39:13.929043 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:39:13.929052 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:39:13.929061 | orchestrator | 2025-09-19 16:39:13.929071 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-09-19 16:39:13.929081 | orchestrator | Friday 19 September 2025 16:39:13 +0000 (0:00:00.787) 0:05:08.381 ****** 2025-09-19 16:39:13.929090 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:39:13.929100 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:39:13.929109 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:39:13.929119 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:39:13.929128 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:39:13.929137 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:39:13.929147 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:39:13.929156 | orchestrator | 2025-09-19 16:39:13.929166 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-09-19 16:39:13.929181 | orchestrator | Friday 19 September 2025 16:39:13 +0000 (0:00:00.274) 0:05:08.656 ****** 2025-09-19 16:39:41.088891 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:39:41.089005 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:39:41.089020 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:39:41.089032 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:39:41.089043 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:39:41.089054 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:39:41.089065 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:39:41.089077 | orchestrator | 2025-09-19 16:39:41.089090 | 
orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-09-19 16:39:41.089102 | orchestrator | Friday 19 September 2025 16:39:14 +0000 (0:00:00.373) 0:05:09.030 ****** 2025-09-19 16:39:41.089113 | orchestrator | ok: [testbed-manager] 2025-09-19 16:39:41.089125 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:39:41.089136 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:39:41.089146 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:39:41.089157 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:39:41.089168 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:39:41.089178 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:39:41.089189 | orchestrator | 2025-09-19 16:39:41.089200 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-09-19 16:39:41.089210 | orchestrator | Friday 19 September 2025 16:39:14 +0000 (0:00:00.278) 0:05:09.309 ****** 2025-09-19 16:39:41.089222 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:39:41.089232 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:39:41.089243 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:39:41.089254 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:39:41.089265 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:39:41.089275 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:39:41.089286 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:39:41.089318 | orchestrator | 2025-09-19 16:39:41.089330 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-09-19 16:39:41.089341 | orchestrator | Friday 19 September 2025 16:39:14 +0000 (0:00:00.293) 0:05:09.602 ****** 2025-09-19 16:39:41.089352 | orchestrator | ok: [testbed-manager] 2025-09-19 16:39:41.089362 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:39:41.089373 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:39:41.089383 | orchestrator | ok: 
[testbed-node-2]
2025-09-19 16:39:41.089394 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:39:41.089404 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:39:41.089415 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:39:41.089427 | orchestrator |
2025-09-19 16:39:41.089440 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-09-19 16:39:41.089452 | orchestrator | Friday 19 September 2025 16:39:15 +0000 (0:00:00.286) 0:05:09.888 ******
2025-09-19 16:39:41.089465 | orchestrator | ok: [testbed-manager] =>
2025-09-19 16:39:41.089478 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 16:39:41.089489 | orchestrator | ok: [testbed-node-0] =>
2025-09-19 16:39:41.089500 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 16:39:41.089510 | orchestrator | ok: [testbed-node-1] =>
2025-09-19 16:39:41.089521 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 16:39:41.089531 | orchestrator | ok: [testbed-node-2] =>
2025-09-19 16:39:41.089542 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 16:39:41.089553 | orchestrator | ok: [testbed-node-3] =>
2025-09-19 16:39:41.089563 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 16:39:41.089574 | orchestrator | ok: [testbed-node-4] =>
2025-09-19 16:39:41.089584 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 16:39:41.089595 | orchestrator | ok: [testbed-node-5] =>
2025-09-19 16:39:41.089606 | orchestrator |  docker_version: 5:27.5.1
2025-09-19 16:39:41.089617 | orchestrator |
2025-09-19 16:39:41.089627 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-09-19 16:39:41.089659 | orchestrator | Friday 19 September 2025 16:39:15 +0000 (0:00:00.258) 0:05:10.146 ******
2025-09-19 16:39:41.089670 | orchestrator | ok: [testbed-manager] =>
2025-09-19 16:39:41.089681 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 16:39:41.089691 | orchestrator | ok: [testbed-node-0] =>
2025-09-19 16:39:41.089702 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 16:39:41.089712 | orchestrator | ok: [testbed-node-1] =>
2025-09-19 16:39:41.089723 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 16:39:41.089733 | orchestrator | ok: [testbed-node-2] =>
2025-09-19 16:39:41.089744 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 16:39:41.089754 | orchestrator | ok: [testbed-node-3] =>
2025-09-19 16:39:41.089765 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 16:39:41.089775 | orchestrator | ok: [testbed-node-4] =>
2025-09-19 16:39:41.089786 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 16:39:41.089797 | orchestrator | ok: [testbed-node-5] =>
2025-09-19 16:39:41.089807 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-19 16:39:41.089818 | orchestrator |
2025-09-19 16:39:41.089847 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-09-19 16:39:41.089859 | orchestrator | Friday 19 September 2025 16:39:15 +0000 (0:00:00.299) 0:05:10.446 ******
2025-09-19 16:39:41.089869 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:39:41.089880 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:39:41.089891 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:39:41.089901 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:39:41.089912 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:39:41.089922 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:39:41.089933 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:39:41.089943 | orchestrator |
2025-09-19 16:39:41.089954 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-09-19 16:39:41.089965 | orchestrator | Friday 19 September 2025 16:39:15 +0000 (0:00:00.262) 0:05:10.709 ******
2025-09-19 16:39:41.089976 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:39:41.089996 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:39:41.090006 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:39:41.090074 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:39:41.090087 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:39:41.090098 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:39:41.090108 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:39:41.090119 | orchestrator |
2025-09-19 16:39:41.090130 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-09-19 16:39:41.090141 | orchestrator | Friday 19 September 2025 16:39:16 +0000 (0:00:00.298) 0:05:11.007 ******
2025-09-19 16:39:41.090171 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:39:41.090186 | orchestrator |
2025-09-19 16:39:41.090197 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-09-19 16:39:41.090208 | orchestrator | Friday 19 September 2025 16:39:16 +0000 (0:00:00.409) 0:05:11.417 ******
2025-09-19 16:39:41.090219 | orchestrator | ok: [testbed-manager]
2025-09-19 16:39:41.090230 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:39:41.090241 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:39:41.090252 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:39:41.090263 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:39:41.090274 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:39:41.090285 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:39:41.090295 | orchestrator |
2025-09-19 16:39:41.090306 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-09-19 16:39:41.090317 | orchestrator | Friday 19 September 2025 16:39:17 +0000 (0:00:00.916) 0:05:12.334 ******
2025-09-19 16:39:41.090328 | orchestrator | ok: [testbed-manager]
2025-09-19 16:39:41.090339 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:39:41.090350 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:39:41.090361 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:39:41.090371 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:39:41.090382 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:39:41.090393 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:39:41.090404 | orchestrator |
2025-09-19 16:39:41.090415 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-09-19 16:39:41.090426 | orchestrator | Friday 19 September 2025 16:39:20 +0000 (0:00:03.143) 0:05:15.478 ******
2025-09-19 16:39:41.090437 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-09-19 16:39:41.090448 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-09-19 16:39:41.090459 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-09-19 16:39:41.090470 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-09-19 16:39:41.090481 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-09-19 16:39:41.090492 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-09-19 16:39:41.090503 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:39:41.090514 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-09-19 16:39:41.090525 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-09-19 16:39:41.090536 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-09-19 16:39:41.090547 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:39:41.090563 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-09-19 16:39:41.090574 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-09-19 16:39:41.090585 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-09-19 16:39:41.090596 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:39:41.090607 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-09-19 16:39:41.090618 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-09-19 16:39:41.090629 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-09-19 16:39:41.090678 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:39:41.090690 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-09-19 16:39:41.090701 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-09-19 16:39:41.090712 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-09-19 16:39:41.090723 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:39:41.090734 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:39:41.090744 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-09-19 16:39:41.090755 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-09-19 16:39:41.090766 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-09-19 16:39:41.090777 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:39:41.090788 | orchestrator |
2025-09-19 16:39:41.090798 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-09-19 16:39:41.090809 | orchestrator | Friday 19 September 2025 16:39:21 +0000 (0:00:00.642) 0:05:16.120 ******
2025-09-19 16:39:41.090820 | orchestrator | ok: [testbed-manager]
2025-09-19 16:39:41.090832 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:39:41.090842 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:39:41.090853 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:39:41.090864 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:39:41.090875 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:39:41.090886 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:39:41.090896 | orchestrator |
2025-09-19 16:39:41.090907 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-09-19 16:39:41.090918 | orchestrator | Friday 19 September 2025 16:39:28 +0000 (0:00:06.980) 0:05:23.101 ******
2025-09-19 16:39:41.090929 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:39:41.090940 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:39:41.090951 | orchestrator | ok: [testbed-manager]
2025-09-19 16:39:41.090962 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:39:41.090973 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:39:41.090984 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:39:41.090995 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:39:41.091006 | orchestrator |
2025-09-19 16:39:41.091017 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-09-19 16:39:41.091027 | orchestrator | Friday 19 September 2025 16:39:29 +0000 (0:00:01.242) 0:05:24.344 ******
2025-09-19 16:39:41.091038 | orchestrator | ok: [testbed-manager]
2025-09-19 16:39:41.091049 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:39:41.091060 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:39:41.091071 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:39:41.091082 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:39:41.091093 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:39:41.091103 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:39:41.091114 | orchestrator |
2025-09-19 16:39:41.091125 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-09-19 16:39:41.091136 | orchestrator | Friday 19 September 2025 16:39:37 +0000 (0:00:07.950) 0:05:32.295 ******
2025-09-19 16:39:41.091147 | orchestrator | changed: [testbed-manager]
2025-09-19 16:39:41.091158 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:39:41.091169 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:39:41.091187 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:26.272154 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:26.272262 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:26.272276 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:26.272289 | orchestrator |
2025-09-19 16:40:26.272301 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-09-19 16:40:26.272314 | orchestrator | Friday 19 September 2025 16:39:41 +0000 (0:00:03.525) 0:05:35.821 ******
2025-09-19 16:40:26.272326 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:26.272338 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:40:26.272349 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:26.272382 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:40:26.272393 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:26.272404 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:26.272414 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:26.272425 | orchestrator |
2025-09-19 16:40:26.272436 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-09-19 16:40:26.272447 | orchestrator | Friday 19 September 2025 16:39:42 +0000 (0:00:01.338) 0:05:37.159 ******
2025-09-19 16:40:26.272458 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:26.272468 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:40:26.272479 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:26.272489 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:26.272500 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:40:26.272510 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:26.272521 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:26.272534 | orchestrator |
2025-09-19 16:40:26.272554 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-09-19 16:40:26.272573 | orchestrator | Friday 19 September 2025 16:39:43 +0000 (0:00:01.318) 0:05:38.477 ******
2025-09-19 16:40:26.272593 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:40:26.272613 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:40:26.272633 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:40:26.272788 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:40:26.272831 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:40:26.272845 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:40:26.272857 | orchestrator | changed: [testbed-manager]
2025-09-19 16:40:26.272886 | orchestrator |
2025-09-19 16:40:26.272909 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-09-19 16:40:26.272922 | orchestrator | Friday 19 September 2025 16:39:44 +0000 (0:00:00.793) 0:05:39.271 ******
2025-09-19 16:40:26.272934 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:26.272948 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:26.272960 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:26.272990 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:40:26.273003 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:40:26.273015 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:26.273026 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:26.273039 | orchestrator |
2025-09-19 16:40:26.273050 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-09-19 16:40:26.273061 | orchestrator | Friday 19 September 2025 16:39:54 +0000 (0:00:09.977) 0:05:49.249 ******
2025-09-19 16:40:26.273072 | orchestrator | changed: [testbed-manager]
2025-09-19 16:40:26.273082 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:40:26.273092 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:26.273103 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:26.273113 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:40:26.273124 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:26.273134 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:26.273145 | orchestrator |
2025-09-19 16:40:26.273155 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-09-19 16:40:26.273166 | orchestrator | Friday 19 September 2025 16:39:55 +0000 (0:00:00.950) 0:05:50.199 ******
2025-09-19 16:40:26.273177 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:26.273187 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:40:26.273198 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:26.273209 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:26.273219 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:40:26.273230 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:26.273240 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:26.273251 | orchestrator |
2025-09-19 16:40:26.273262 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-09-19 16:40:26.273272 | orchestrator | Friday 19 September 2025 16:40:04 +0000 (0:00:09.545) 0:05:59.745 ******
2025-09-19 16:40:26.273296 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:26.273307 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:26.273317 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:26.273328 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:40:26.273339 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:40:26.273349 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:26.273360 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:26.273370 | orchestrator |
2025-09-19 16:40:26.273381 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-09-19 16:40:26.273391 | orchestrator | Friday 19 September 2025 16:40:16 +0000 (0:00:11.084) 0:06:10.829 ******
2025-09-19 16:40:26.273402 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-09-19 16:40:26.273414 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-09-19 16:40:26.273424 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-09-19 16:40:26.273435 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-09-19 16:40:26.273446 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-09-19 16:40:26.273456 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-09-19 16:40:26.273467 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-09-19 16:40:26.273477 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-09-19 16:40:26.273488 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-09-19 16:40:26.273499 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-09-19 16:40:26.273509 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-09-19 16:40:26.273520 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-09-19 16:40:26.273530 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-09-19 16:40:26.273541 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-09-19 16:40:26.273552 | orchestrator |
2025-09-19 16:40:26.273563 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-09-19 16:40:26.273595 | orchestrator | Friday 19 September 2025 16:40:17 +0000 (0:00:01.229) 0:06:12.058 ******
2025-09-19 16:40:26.273606 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:40:26.273617 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:40:26.273628 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:40:26.273638 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:40:26.273649 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:40:26.273682 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:40:26.273693 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:40:26.273705 | orchestrator |
2025-09-19 16:40:26.273725 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-09-19 16:40:26.273744 | orchestrator | Friday 19 September 2025 16:40:17 +0000 (0:00:00.525) 0:06:12.584 ******
2025-09-19 16:40:26.273766 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:26.273786 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:40:26.273802 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:26.273813 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:26.273823 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:40:26.273834 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:26.273844 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:26.273855 | orchestrator |
2025-09-19 16:40:26.273865 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-09-19 16:40:26.273878 | orchestrator | Friday 19 September 2025 16:40:22 +0000 (0:00:04.219) 0:06:16.804 ******
2025-09-19 16:40:26.273888 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:40:26.273899 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:40:26.273909 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:40:26.273920 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:40:26.273930 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:40:26.273940 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:40:26.273951 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:40:26.273970 | orchestrator |
2025-09-19 16:40:26.273982 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-09-19 16:40:26.273993 | orchestrator | Friday 19 September 2025 16:40:22 +0000 (0:00:00.547) 0:06:17.351 ******
2025-09-19 16:40:26.274004 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-09-19 16:40:26.274073 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-09-19 16:40:26.274087 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:40:26.274098 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-09-19 16:40:26.274115 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-09-19 16:40:26.274126 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:40:26.274137 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-09-19 16:40:26.274148 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-09-19 16:40:26.274158 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:40:26.274169 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-09-19 16:40:26.274180 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-09-19 16:40:26.274190 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:40:26.274237 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-09-19 16:40:26.274249 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-09-19 16:40:26.274259 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:40:26.274270 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-09-19 16:40:26.274280 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-09-19 16:40:26.274291 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:40:26.274302 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-09-19 16:40:26.274312 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-09-19 16:40:26.274323 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:40:26.274334 | orchestrator |
2025-09-19 16:40:26.274345 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-09-19 16:40:26.274355 | orchestrator | Friday 19 September 2025 16:40:23 +0000 (0:00:00.703) 0:06:18.055 ******
2025-09-19 16:40:26.274366 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:40:26.274377 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:40:26.274387 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:40:26.274398 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:40:26.274409 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:40:26.274419 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:40:26.274430 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:40:26.274440 | orchestrator |
2025-09-19 16:40:26.274451 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-09-19 16:40:26.274462 | orchestrator | Friday 19 September 2025 16:40:23 +0000 (0:00:00.499) 0:06:18.555 ******
2025-09-19 16:40:26.274473 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:40:26.274483 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:40:26.274494 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:40:26.274504 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:40:26.274515 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:40:26.274526 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:40:26.274536 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:40:26.274547 | orchestrator |
2025-09-19 16:40:26.274558 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-09-19 16:40:26.274568 | orchestrator | Friday 19 September 2025 16:40:24 +0000 (0:00:00.515) 0:06:19.070 ******
2025-09-19 16:40:26.274579 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:40:26.274590 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:40:26.274600 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:40:26.274611 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:40:26.274621 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:40:26.274639 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:40:26.274650 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:40:26.274718 | orchestrator |
2025-09-19 16:40:26.274729 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-09-19 16:40:26.274740 | orchestrator | Friday 19 September 2025 16:40:24 +0000 (0:00:00.495) 0:06:19.565 ******
2025-09-19 16:40:26.274750 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:26.274771 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:40:47.959150 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:40:47.959273 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:40:47.959293 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:40:47.959308 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:40:47.959322 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:40:47.959337 | orchestrator |
2025-09-19 16:40:47.959353 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-09-19 16:40:47.959370 | orchestrator | Friday 19 September 2025 16:40:26 +0000 (0:00:01.439) 0:06:21.005 ******
2025-09-19 16:40:47.959385 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:40:47.959401 | orchestrator |
2025-09-19 16:40:47.959415 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-09-19 16:40:47.959429 | orchestrator | Friday 19 September 2025 16:40:27 +0000 (0:00:00.991) 0:06:21.996 ******
2025-09-19 16:40:47.959442 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:47.959455 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:40:47.959469 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:47.959482 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:47.959496 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:40:47.959509 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:47.959523 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:47.959538 | orchestrator |
2025-09-19 16:40:47.959551 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-09-19 16:40:47.959565 | orchestrator | Friday 19 September 2025 16:40:28 +0000 (0:00:00.879) 0:06:22.876 ******
2025-09-19 16:40:47.959578 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:47.959592 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:40:47.959606 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:47.959620 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:47.959634 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:40:47.959650 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:47.959691 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:47.959706 | orchestrator |
2025-09-19 16:40:47.959721 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-09-19 16:40:47.959736 | orchestrator | Friday 19 September 2025 16:40:28 +0000 (0:00:00.832) 0:06:23.709 ******
2025-09-19 16:40:47.959751 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:47.959766 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:40:47.959782 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:47.959796 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:47.959811 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:40:47.959825 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:47.959840 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:47.959855 | orchestrator |
2025-09-19 16:40:47.959870 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-09-19 16:40:47.959885 | orchestrator | Friday 19 September 2025 16:40:30 +0000 (0:00:01.403) 0:06:25.112 ******
2025-09-19 16:40:47.959901 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:40:47.959915 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:40:47.959929 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:40:47.959943 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:40:47.959953 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:40:47.959961 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:40:47.959995 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:40:47.960004 | orchestrator |
2025-09-19 16:40:47.960013 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-09-19 16:40:47.960022 | orchestrator | Friday 19 September 2025 16:40:31 +0000 (0:00:01.534) 0:06:26.647 ******
2025-09-19 16:40:47.960030 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:47.960039 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:40:47.960047 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:47.960055 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:47.960064 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:40:47.960072 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:47.960081 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:47.960089 | orchestrator |
2025-09-19 16:40:47.960098 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-09-19 16:40:47.960106 | orchestrator | Friday 19 September 2025 16:40:33 +0000 (0:00:01.358) 0:06:28.005 ******
2025-09-19 16:40:47.960115 | orchestrator | changed: [testbed-manager]
2025-09-19 16:40:47.960123 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:47.960132 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:40:47.960140 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:47.960149 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:40:47.960157 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:47.960165 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:47.960174 | orchestrator |
2025-09-19 16:40:47.960182 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-09-19 16:40:47.960190 | orchestrator | Friday 19 September 2025 16:40:34 +0000 (0:00:01.393) 0:06:29.399 ******
2025-09-19 16:40:47.960199 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:40:47.960209 | orchestrator |
2025-09-19 16:40:47.960217 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-09-19 16:40:47.960226 | orchestrator | Friday 19 September 2025 16:40:35 +0000 (0:00:00.987) 0:06:30.386 ******
2025-09-19 16:40:47.960234 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:47.960243 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:40:47.960251 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:40:47.960260 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:40:47.960268 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:40:47.960277 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:40:47.960285 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:40:47.960294 | orchestrator |
2025-09-19 16:40:47.960302 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-09-19 16:40:47.960311 | orchestrator | Friday 19 September 2025 16:40:37 +0000 (0:00:01.391) 0:06:31.778 ******
2025-09-19 16:40:47.960319 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:47.960328 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:40:47.960355 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:40:47.960364 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:40:47.960372 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:40:47.960381 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:40:47.960389 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:40:47.960397 | orchestrator |
2025-09-19 16:40:47.960406 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-09-19 16:40:47.960415 | orchestrator | Friday 19 September 2025 16:40:38 +0000 (0:00:01.263) 0:06:33.041 ******
2025-09-19 16:40:47.960423 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:47.960431 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:40:47.960440 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:40:47.960448 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:40:47.960456 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:40:47.960465 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:40:47.960473 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:40:47.960481 | orchestrator |
2025-09-19 16:40:47.960490 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-09-19 16:40:47.960508 | orchestrator | Friday 19 September 2025 16:40:39 +0000 (0:00:01.101) 0:06:34.143 ******
2025-09-19 16:40:47.960522 | orchestrator | ok: [testbed-manager]
2025-09-19 16:40:47.960537 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:40:47.960551 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:40:47.960565 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:40:47.960579 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:40:47.960592 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:40:47.960606 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:40:47.960622 | orchestrator |
2025-09-19 16:40:47.960637 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-09-19 16:40:47.960701 | orchestrator | Friday 19 September 2025 16:40:40 +0000 (0:00:01.124) 0:06:35.267 ******
2025-09-19 16:40:47.960714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:40:47.960723 | orchestrator |
2025-09-19 16:40:47.960731 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 16:40:47.960740 | orchestrator | Friday 19 September 2025 16:40:41 +0000 (0:00:01.082) 0:06:36.349 ******
2025-09-19 16:40:47.960748 | orchestrator |
2025-09-19 16:40:47.960757 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 16:40:47.960765 | orchestrator | Friday 19 September 2025 16:40:41 +0000 (0:00:00.038) 0:06:36.388 ******
2025-09-19 16:40:47.960774 | orchestrator |
2025-09-19 16:40:47.960786 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 16:40:47.960795 | orchestrator | Friday 19 September 2025 16:40:41 +0000 (0:00:00.039) 0:06:36.427 ******
2025-09-19 16:40:47.960804 | orchestrator |
2025-09-19 16:40:47.960812 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 16:40:47.960821 | orchestrator | Friday 19 September 2025 16:40:41 +0000 (0:00:00.045) 0:06:36.473 ******
2025-09-19 16:40:47.960829 | orchestrator |
2025-09-19 16:40:47.960838 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 16:40:47.960846 | orchestrator | Friday 19 September 2025 16:40:41 +0000 (0:00:00.037) 0:06:36.510 ******
2025-09-19 16:40:47.960855 | orchestrator |
2025-09-19 16:40:47.960863 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 16:40:47.960871 | orchestrator | Friday 19 September 2025 16:40:41 +0000 (0:00:00.037) 0:06:36.548 ******
2025-09-19 16:40:47.960880 | orchestrator |
2025-09-19 16:40:47.960888 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-19 16:40:47.960897 | orchestrator | Friday 19 September 2025 16:40:41 +0000 (0:00:00.055) 0:06:36.604 ******
2025-09-19 16:40:47.960905 | orchestrator |
2025-09-19 16:40:47.960914 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-19 16:40:47.960922 | orchestrator | Friday 19 September 2025 16:40:41 +0000 (0:00:00.038) 0:06:36.642 ******
2025-09-19 16:40:47.960930 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:40:47.960939 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:40:47.960947 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:40:47.960956 | orchestrator |
2025-09-19 16:40:47.960964 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-09-19 16:40:47.960973 | orchestrator | Friday 19 September 2025 16:40:43 +0000 (0:00:01.126) 0:06:37.768 ******
2025-09-19 16:40:47.960982 | orchestrator | changed: [testbed-manager]
2025-09-19 16:40:47.960990 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:40:47.960999 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:47.961007 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:40:47.961015 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:40:47.961024 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:40:47.961032 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:40:47.961040 | orchestrator |
2025-09-19 16:40:47.961049 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-09-19 16:40:47.961065 | orchestrator | Friday 19 September 2025 16:40:44 +0000 (0:00:01.374) 0:06:39.143 ******
2025-09-19 16:40:47.961073 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:40:47.961082 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:40:47.961090 | orchestrator | changed: [testbed-node-0]
2025-09-19
16:40:47.961099 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:40:47.961107 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:40:47.961116 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:40:47.961124 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:40:47.961133 | orchestrator | 2025-09-19 16:40:47.961141 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-09-19 16:40:47.961150 | orchestrator | Friday 19 September 2025 16:40:46 +0000 (0:00:02.411) 0:06:41.554 ****** 2025-09-19 16:40:47.961158 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:40:47.961167 | orchestrator | 2025-09-19 16:40:47.961175 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-09-19 16:40:47.961184 | orchestrator | Friday 19 September 2025 16:40:46 +0000 (0:00:00.101) 0:06:41.656 ****** 2025-09-19 16:40:47.961192 | orchestrator | ok: [testbed-manager] 2025-09-19 16:40:47.961200 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:40:47.961209 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:40:47.961217 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:40:47.961234 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:41:14.519306 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:41:14.519419 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:41:14.519435 | orchestrator | 2025-09-19 16:41:14.519449 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-09-19 16:41:14.519461 | orchestrator | Friday 19 September 2025 16:40:47 +0000 (0:00:01.032) 0:06:42.689 ****** 2025-09-19 16:41:14.519474 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:41:14.519485 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:41:14.519497 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:41:14.519508 | orchestrator | skipping: [testbed-node-2] 2025-09-19 
16:41:14.519519 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:41:14.519530 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:41:14.519541 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:41:14.519551 | orchestrator | 2025-09-19 16:41:14.519563 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-09-19 16:41:14.519574 | orchestrator | Friday 19 September 2025 16:40:48 +0000 (0:00:00.551) 0:06:43.241 ****** 2025-09-19 16:41:14.519586 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:41:14.519600 | orchestrator | 2025-09-19 16:41:14.519611 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-09-19 16:41:14.519623 | orchestrator | Friday 19 September 2025 16:40:49 +0000 (0:00:01.081) 0:06:44.322 ****** 2025-09-19 16:41:14.519635 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:14.519647 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:14.519658 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:14.519727 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:14.519740 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:14.519751 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:14.519762 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:14.519772 | orchestrator | 2025-09-19 16:41:14.519783 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-09-19 16:41:14.519794 | orchestrator | Friday 19 September 2025 16:40:50 +0000 (0:00:00.842) 0:06:45.165 ****** 2025-09-19 16:41:14.519805 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-09-19 16:41:14.519816 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-09-19 16:41:14.519844 
| orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-09-19 16:41:14.519880 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-09-19 16:41:14.519894 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-09-19 16:41:14.519906 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-09-19 16:41:14.519918 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-09-19 16:41:14.519931 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-09-19 16:41:14.519943 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-09-19 16:41:14.519955 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-09-19 16:41:14.519967 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-09-19 16:41:14.519979 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-09-19 16:41:14.519991 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-09-19 16:41:14.520003 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-09-19 16:41:14.520016 | orchestrator | 2025-09-19 16:41:14.520028 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-09-19 16:41:14.520041 | orchestrator | Friday 19 September 2025 16:40:53 +0000 (0:00:02.607) 0:06:47.773 ****** 2025-09-19 16:41:14.520053 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:41:14.520065 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:41:14.520077 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:41:14.520089 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:41:14.520101 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:41:14.520113 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:41:14.520125 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:41:14.520137 | orchestrator | 2025-09-19 16:41:14.520162 | orchestrator | TASK 
[osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-09-19 16:41:14.520184 | orchestrator | Friday 19 September 2025 16:40:53 +0000 (0:00:00.493) 0:06:48.266 ****** 2025-09-19 16:41:14.520197 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:41:14.520210 | orchestrator | 2025-09-19 16:41:14.520221 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-09-19 16:41:14.520231 | orchestrator | Friday 19 September 2025 16:40:54 +0000 (0:00:00.974) 0:06:49.240 ****** 2025-09-19 16:41:14.520242 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:14.520253 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:14.520263 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:14.520274 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:14.520284 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:14.520295 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:14.520305 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:14.520316 | orchestrator | 2025-09-19 16:41:14.520326 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-09-19 16:41:14.520337 | orchestrator | Friday 19 September 2025 16:40:55 +0000 (0:00:00.884) 0:06:50.125 ****** 2025-09-19 16:41:14.520348 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:14.520359 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:14.520369 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:14.520380 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:14.520391 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:14.520401 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:14.520412 | orchestrator | ok: [testbed-node-5] 2025-09-19 
16:41:14.520422 | orchestrator | 2025-09-19 16:41:14.520433 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-09-19 16:41:14.520461 | orchestrator | Friday 19 September 2025 16:40:56 +0000 (0:00:00.802) 0:06:50.928 ****** 2025-09-19 16:41:14.520472 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:41:14.520483 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:41:14.520493 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:41:14.520520 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:41:14.520531 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:41:14.520541 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:41:14.520552 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:41:14.520562 | orchestrator | 2025-09-19 16:41:14.520573 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-09-19 16:41:14.520584 | orchestrator | Friday 19 September 2025 16:40:56 +0000 (0:00:00.466) 0:06:51.395 ****** 2025-09-19 16:41:14.520594 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:14.520605 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:14.520615 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:14.520626 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:14.520636 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:14.520647 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:14.520657 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:14.520686 | orchestrator | 2025-09-19 16:41:14.520698 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-09-19 16:41:14.520709 | orchestrator | Friday 19 September 2025 16:40:58 +0000 (0:00:01.660) 0:06:53.056 ****** 2025-09-19 16:41:14.520719 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:41:14.520730 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:41:14.520741 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 16:41:14.520751 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:41:14.520762 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:41:14.520772 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:41:14.520783 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:41:14.520793 | orchestrator | 2025-09-19 16:41:14.520804 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-09-19 16:41:14.520815 | orchestrator | Friday 19 September 2025 16:40:58 +0000 (0:00:00.493) 0:06:53.549 ****** 2025-09-19 16:41:14.520825 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:14.520836 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:41:14.520846 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:41:14.520856 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:41:14.520867 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:41:14.520877 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:41:14.520888 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:41:14.520898 | orchestrator | 2025-09-19 16:41:14.520909 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-09-19 16:41:14.520926 | orchestrator | Friday 19 September 2025 16:41:06 +0000 (0:00:07.744) 0:07:01.294 ****** 2025-09-19 16:41:14.520937 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:14.520948 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:41:14.520958 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:41:14.520969 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:41:14.520979 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:41:14.520990 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:41:14.521000 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:41:14.521010 | orchestrator | 2025-09-19 16:41:14.521021 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] 
********************** 2025-09-19 16:41:14.521031 | orchestrator | Friday 19 September 2025 16:41:07 +0000 (0:00:01.311) 0:07:02.605 ****** 2025-09-19 16:41:14.521042 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:41:14.521053 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:41:14.521063 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:41:14.521073 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:41:14.521084 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:41:14.521094 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:14.521105 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:41:14.521115 | orchestrator | 2025-09-19 16:41:14.521126 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-09-19 16:41:14.521136 | orchestrator | Friday 19 September 2025 16:41:10 +0000 (0:00:02.475) 0:07:05.081 ****** 2025-09-19 16:41:14.521147 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:14.521165 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:41:14.521176 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:41:14.521186 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:41:14.521196 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:41:14.521207 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:41:14.521218 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:41:14.521228 | orchestrator | 2025-09-19 16:41:14.521239 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-19 16:41:14.521249 | orchestrator | Friday 19 September 2025 16:41:12 +0000 (0:00:01.861) 0:07:06.943 ****** 2025-09-19 16:41:14.521260 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:14.521271 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:14.521281 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:14.521292 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:14.521302 | orchestrator | ok: 
[testbed-node-3] 2025-09-19 16:41:14.521313 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:14.521324 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:14.521334 | orchestrator | 2025-09-19 16:41:14.521345 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-19 16:41:14.521356 | orchestrator | Friday 19 September 2025 16:41:13 +0000 (0:00:00.855) 0:07:07.798 ****** 2025-09-19 16:41:14.521366 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:41:14.521377 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:41:14.521387 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:41:14.521398 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:41:14.521408 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:41:14.521419 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:41:14.521429 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:41:14.521440 | orchestrator | 2025-09-19 16:41:14.521450 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-09-19 16:41:14.521461 | orchestrator | Friday 19 September 2025 16:41:13 +0000 (0:00:00.945) 0:07:08.743 ****** 2025-09-19 16:41:14.521471 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:41:14.521482 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:41:14.521492 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:41:14.521502 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:41:14.521513 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:41:14.521524 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:41:14.521534 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:41:14.521545 | orchestrator | 2025-09-19 16:41:14.521562 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-09-19 16:41:46.616817 | orchestrator | Friday 19 September 2025 16:41:14 +0000 (0:00:00.504) 0:07:09.248 
****** 2025-09-19 16:41:46.616938 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:46.616956 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:46.616968 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:46.616978 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:46.616989 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:46.617000 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:46.617011 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:46.617023 | orchestrator | 2025-09-19 16:41:46.617035 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-09-19 16:41:46.617046 | orchestrator | Friday 19 September 2025 16:41:15 +0000 (0:00:00.527) 0:07:09.776 ****** 2025-09-19 16:41:46.617056 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:46.617067 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:46.617078 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:46.617088 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:46.617099 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:46.617109 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:46.617120 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:46.617130 | orchestrator | 2025-09-19 16:41:46.617141 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-09-19 16:41:46.617152 | orchestrator | Friday 19 September 2025 16:41:15 +0000 (0:00:00.517) 0:07:10.293 ****** 2025-09-19 16:41:46.617185 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:46.617196 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:46.617206 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:46.617217 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:46.617227 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:46.617237 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:46.617248 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:46.617260 | orchestrator | 
2025-09-19 16:41:46.617272 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-09-19 16:41:46.617290 | orchestrator | Friday 19 September 2025 16:41:16 +0000 (0:00:00.520) 0:07:10.814 ****** 2025-09-19 16:41:46.617308 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:46.617326 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:46.617346 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:46.617366 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:46.617385 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:46.617398 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:46.617410 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:46.617421 | orchestrator | 2025-09-19 16:41:46.617433 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-19 16:41:46.617461 | orchestrator | Friday 19 September 2025 16:41:22 +0000 (0:00:05.971) 0:07:16.786 ****** 2025-09-19 16:41:46.617473 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:41:46.617486 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:41:46.617499 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:41:46.617511 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:41:46.617523 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:41:46.617535 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:41:46.617547 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:41:46.617559 | orchestrator | 2025-09-19 16:41:46.617572 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-19 16:41:46.617584 | orchestrator | Friday 19 September 2025 16:41:22 +0000 (0:00:00.598) 0:07:17.384 ****** 2025-09-19 16:41:46.617597 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:41:46.617612 | orchestrator | 2025-09-19 16:41:46.617623 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-19 16:41:46.617633 | orchestrator | Friday 19 September 2025 16:41:23 +0000 (0:00:00.787) 0:07:18.171 ****** 2025-09-19 16:41:46.617644 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:46.617654 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:46.617665 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:46.617676 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:46.617707 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:46.617718 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:46.617728 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:46.617739 | orchestrator | 2025-09-19 16:41:46.617750 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-19 16:41:46.617760 | orchestrator | Friday 19 September 2025 16:41:25 +0000 (0:00:01.969) 0:07:20.140 ****** 2025-09-19 16:41:46.617777 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:46.617796 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:46.617817 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:46.617836 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:46.617847 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:46.617858 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:46.617868 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:46.617878 | orchestrator | 2025-09-19 16:41:46.617889 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-19 16:41:46.617900 | orchestrator | Friday 19 September 2025 16:41:26 +0000 (0:00:01.137) 0:07:21.278 ****** 2025-09-19 16:41:46.617911 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:46.617921 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:46.617941 | 
orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:46.617952 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:46.617963 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:46.617973 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:46.617984 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:46.617994 | orchestrator | 2025-09-19 16:41:46.618005 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-19 16:41:46.618080 | orchestrator | Friday 19 September 2025 16:41:27 +0000 (0:00:00.849) 0:07:22.127 ****** 2025-09-19 16:41:46.618093 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 16:41:46.618106 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 16:41:46.618117 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 16:41:46.618180 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 16:41:46.618194 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 16:41:46.618205 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 16:41:46.618216 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-19 16:41:46.618227 | orchestrator | 2025-09-19 16:41:46.618238 | orchestrator | TASK [osism.services.lldpd : Include 
distribution specific install tasks] ****** 2025-09-19 16:41:46.618249 | orchestrator | Friday 19 September 2025 16:41:29 +0000 (0:00:01.765) 0:07:23.893 ****** 2025-09-19 16:41:46.618260 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:41:46.618272 | orchestrator | 2025-09-19 16:41:46.618282 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-19 16:41:46.618301 | orchestrator | Friday 19 September 2025 16:41:30 +0000 (0:00:01.024) 0:07:24.917 ****** 2025-09-19 16:41:46.618321 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:41:46.618343 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:41:46.618362 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:41:46.618373 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:41:46.618384 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:41:46.618395 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:41:46.618405 | orchestrator | changed: [testbed-manager] 2025-09-19 16:41:46.618416 | orchestrator | 2025-09-19 16:41:46.618426 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-19 16:41:46.618437 | orchestrator | Friday 19 September 2025 16:41:38 +0000 (0:00:08.690) 0:07:33.608 ****** 2025-09-19 16:41:46.618448 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:46.618459 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:46.618470 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:46.618480 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:46.618491 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:46.618501 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:46.618512 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:46.618523 | 
orchestrator | 2025-09-19 16:41:46.618533 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-09-19 16:41:46.618544 | orchestrator | Friday 19 September 2025 16:41:40 +0000 (0:00:01.856) 0:07:35.464 ****** 2025-09-19 16:41:46.618555 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:46.618565 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:46.618585 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:46.618596 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:46.618606 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:46.618617 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:41:46.618627 | orchestrator | 2025-09-19 16:41:46.618638 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-09-19 16:41:46.618649 | orchestrator | Friday 19 September 2025 16:41:41 +0000 (0:00:01.277) 0:07:36.742 ****** 2025-09-19 16:41:46.618659 | orchestrator | changed: [testbed-manager] 2025-09-19 16:41:46.618670 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:41:46.618720 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:41:46.618738 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:41:46.618756 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:41:46.618790 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:41:46.618807 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:41:46.618817 | orchestrator | 2025-09-19 16:41:46.618828 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-09-19 16:41:46.618839 | orchestrator | 2025-09-19 16:41:46.618850 | orchestrator | TASK [Include hardening role] ************************************************** 2025-09-19 16:41:46.618860 | orchestrator | Friday 19 September 2025 16:41:43 +0000 (0:00:01.207) 0:07:37.950 ****** 2025-09-19 16:41:46.618871 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:41:46.618881 | orchestrator | 
skipping: [testbed-node-0] 2025-09-19 16:41:46.618892 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:41:46.618903 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:41:46.618913 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:41:46.618924 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:41:46.618934 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:41:46.618944 | orchestrator | 2025-09-19 16:41:46.618955 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-09-19 16:41:46.618966 | orchestrator | 2025-09-19 16:41:46.618977 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-09-19 16:41:46.618987 | orchestrator | Friday 19 September 2025 16:41:43 +0000 (0:00:00.511) 0:07:38.461 ****** 2025-09-19 16:41:46.618998 | orchestrator | changed: [testbed-manager] 2025-09-19 16:41:46.619008 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:41:46.619019 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:41:46.619030 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:41:46.619040 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:41:46.619050 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:41:46.619061 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:41:46.619071 | orchestrator | 2025-09-19 16:41:46.619082 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-09-19 16:41:46.619093 | orchestrator | Friday 19 September 2025 16:41:44 +0000 (0:00:01.246) 0:07:39.707 ****** 2025-09-19 16:41:46.619104 | orchestrator | ok: [testbed-manager] 2025-09-19 16:41:46.619114 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:41:46.619125 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:41:46.619135 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:41:46.619146 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:41:46.619156 | orchestrator | ok: 
[testbed-node-4] 2025-09-19 16:41:46.619167 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:41:46.619177 | orchestrator | 2025-09-19 16:41:46.619188 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-09-19 16:41:46.619207 | orchestrator | Friday 19 September 2025 16:41:46 +0000 (0:00:01.633) 0:07:41.340 ****** 2025-09-19 16:42:08.133207 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:42:08.133313 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:42:08.133327 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:42:08.133340 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:42:08.133351 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:42:08.133362 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:42:08.133373 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:42:08.133408 | orchestrator | 2025-09-19 16:42:08.133421 | orchestrator | TASK [Include smartd role] ***************************************************** 2025-09-19 16:42:08.133433 | orchestrator | Friday 19 September 2025 16:41:47 +0000 (0:00:00.484) 0:07:41.825 ****** 2025-09-19 16:42:08.133445 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:42:08.133457 | orchestrator | 2025-09-19 16:42:08.133468 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-09-19 16:42:08.133479 | orchestrator | Friday 19 September 2025 16:41:48 +0000 (0:00:00.982) 0:07:42.808 ****** 2025-09-19 16:42:08.133491 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:42:08.133504 | orchestrator | 2025-09-19 16:42:08.133516 | orchestrator | 
TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-09-19 16:42:08.133526 | orchestrator | Friday 19 September 2025 16:41:48 +0000 (0:00:00.833) 0:07:43.641 ****** 2025-09-19 16:42:08.133537 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:42:08.133548 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:42:08.133559 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:42:08.133570 | orchestrator | changed: [testbed-manager] 2025-09-19 16:42:08.133580 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:42:08.133591 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:42:08.133601 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:42:08.133612 | orchestrator | 2025-09-19 16:42:08.133668 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-09-19 16:42:08.133723 | orchestrator | Friday 19 September 2025 16:41:56 +0000 (0:00:07.693) 0:07:51.335 ****** 2025-09-19 16:42:08.133736 | orchestrator | changed: [testbed-manager] 2025-09-19 16:42:08.133748 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:42:08.133761 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:42:08.133773 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:42:08.133784 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:42:08.133798 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:42:08.133809 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:42:08.133822 | orchestrator | 2025-09-19 16:42:08.133834 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-09-19 16:42:08.133847 | orchestrator | Friday 19 September 2025 16:41:57 +0000 (0:00:00.749) 0:07:52.084 ****** 2025-09-19 16:42:08.133859 | orchestrator | changed: [testbed-manager] 2025-09-19 16:42:08.133871 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:42:08.133883 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:42:08.133896 | 
orchestrator | changed: [testbed-node-2] 2025-09-19 16:42:08.133908 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:42:08.133920 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:42:08.133932 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:42:08.133945 | orchestrator | 2025-09-19 16:42:08.133957 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-09-19 16:42:08.133970 | orchestrator | Friday 19 September 2025 16:41:58 +0000 (0:00:01.479) 0:07:53.564 ****** 2025-09-19 16:42:08.133983 | orchestrator | changed: [testbed-manager] 2025-09-19 16:42:08.133995 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:42:08.134007 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:42:08.134078 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:42:08.134091 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:42:08.134103 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:42:08.134115 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:42:08.134126 | orchestrator | 2025-09-19 16:42:08.134137 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-09-19 16:42:08.134148 | orchestrator | Friday 19 September 2025 16:42:00 +0000 (0:00:01.633) 0:07:55.198 ****** 2025-09-19 16:42:08.134159 | orchestrator | changed: [testbed-manager] 2025-09-19 16:42:08.134180 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:42:08.134191 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:42:08.134202 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:42:08.134212 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:42:08.134223 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:42:08.134233 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:42:08.134244 | orchestrator | 2025-09-19 16:42:08.134255 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-09-19 
16:42:08.134266 | orchestrator | Friday 19 September 2025 16:42:01 +0000 (0:00:01.162) 0:07:56.361 ****** 2025-09-19 16:42:08.134277 | orchestrator | changed: [testbed-manager] 2025-09-19 16:42:08.134287 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:42:08.134298 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:42:08.134309 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:42:08.134319 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:42:08.134330 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:42:08.134340 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:42:08.134351 | orchestrator | 2025-09-19 16:42:08.134362 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-09-19 16:42:08.134373 | orchestrator | 2025-09-19 16:42:08.134383 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-09-19 16:42:08.134394 | orchestrator | Friday 19 September 2025 16:42:02 +0000 (0:00:01.189) 0:07:57.550 ****** 2025-09-19 16:42:08.134405 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:42:08.134416 | orchestrator | 2025-09-19 16:42:08.134427 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-19 16:42:08.134454 | orchestrator | Friday 19 September 2025 16:42:03 +0000 (0:00:00.712) 0:07:58.262 ****** 2025-09-19 16:42:08.134466 | orchestrator | ok: [testbed-manager] 2025-09-19 16:42:08.134478 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:42:08.134488 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:42:08.134499 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:42:08.134510 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:42:08.134521 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:42:08.134531 | orchestrator | ok: [testbed-node-5] 2025-09-19 
16:42:08.134542 | orchestrator | 2025-09-19 16:42:08.134553 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-19 16:42:08.134564 | orchestrator | Friday 19 September 2025 16:42:04 +0000 (0:00:00.793) 0:07:59.056 ****** 2025-09-19 16:42:08.134574 | orchestrator | changed: [testbed-manager] 2025-09-19 16:42:08.134585 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:42:08.134596 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:42:08.134606 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:42:08.134617 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:42:08.134628 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:42:08.134638 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:42:08.134649 | orchestrator | 2025-09-19 16:42:08.134660 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-09-19 16:42:08.134671 | orchestrator | Friday 19 September 2025 16:42:05 +0000 (0:00:01.156) 0:08:00.213 ****** 2025-09-19 16:42:08.134682 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:42:08.134708 | orchestrator | 2025-09-19 16:42:08.134720 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-09-19 16:42:08.134730 | orchestrator | Friday 19 September 2025 16:42:06 +0000 (0:00:00.708) 0:08:00.922 ****** 2025-09-19 16:42:08.134741 | orchestrator | ok: [testbed-manager] 2025-09-19 16:42:08.134752 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:42:08.134762 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:42:08.134773 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:42:08.134783 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:42:08.134801 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:42:08.134812 | orchestrator | ok: [testbed-node-5] 2025-09-19 
16:42:08.134822 | orchestrator | 2025-09-19 16:42:08.134833 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-09-19 16:42:08.134844 | orchestrator | Friday 19 September 2025 16:42:06 +0000 (0:00:00.762) 0:08:01.684 ****** 2025-09-19 16:42:08.134855 | orchestrator | changed: [testbed-manager] 2025-09-19 16:42:08.134866 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:42:08.134876 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:42:08.134887 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:42:08.134898 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:42:08.134908 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:42:08.134919 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:42:08.134929 | orchestrator | 2025-09-19 16:42:08.134940 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 16:42:08.134952 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-09-19 16:42:08.134963 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-09-19 16:42:08.134975 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 16:42:08.134986 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 16:42:08.134997 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 16:42:08.135007 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 16:42:08.135018 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-09-19 16:42:08.135029 | orchestrator | 2025-09-19 16:42:08.135039 | orchestrator | 2025-09-19 
16:42:08.135050 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 16:42:08.135061 | orchestrator | Friday 19 September 2025 16:42:08 +0000 (0:00:01.170) 0:08:02.854 ****** 2025-09-19 16:42:08.135072 | orchestrator | =============================================================================== 2025-09-19 16:42:08.135083 | orchestrator | osism.commons.packages : Install required packages --------------------- 77.30s 2025-09-19 16:42:08.135094 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.42s 2025-09-19 16:42:08.135104 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.94s 2025-09-19 16:42:08.135115 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.17s 2025-09-19 16:42:08.135126 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 12.47s 2025-09-19 16:42:08.135137 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.37s 2025-09-19 16:42:08.135148 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.08s 2025-09-19 16:42:08.135159 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.98s 2025-09-19 16:42:08.135170 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.55s 2025-09-19 16:42:08.135180 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 8.69s 2025-09-19 16:42:08.135197 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.64s 2025-09-19 16:42:08.588751 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.49s 2025-09-19 16:42:08.588851 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.95s 2025-09-19 16:42:08.588891 | 
orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.85s 2025-09-19 16:42:08.588902 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.74s 2025-09-19 16:42:08.588913 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.69s 2025-09-19 16:42:08.588924 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.98s 2025-09-19 16:42:08.588935 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.56s 2025-09-19 16:42:08.588945 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.97s 2025-09-19 16:42:08.588956 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.68s 2025-09-19 16:42:08.861296 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-19 16:42:08.861390 | orchestrator | + osism apply network 2025-09-19 16:42:21.434875 | orchestrator | 2025-09-19 16:42:21 | INFO  | Task 6326a16b-278c-45b0-b8f6-d58ac783f0ea (network) was prepared for execution. 2025-09-19 16:42:21.435007 | orchestrator | 2025-09-19 16:42:21 | INFO  | It takes a moment until task 6326a16b-278c-45b0-b8f6-d58ac783f0ea (network) has been started and output is visible here. 
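The `osism apply network` run that follows renders systemd-networkd netdev and network files for `vxlan0` and `vxlan1` on each node (VNI 42 and 23, MTU 1350, local/remote addresses in 192.168.16.0/24, overlay addresses in 192.168.112.0/20 and 192.168.128.0/20, per the task output below). A minimal sketch of such a file pair for the manager's `vxlan0`, with hypothetical file names and a single peer for brevity (the role actually fans out to six destinations; how it wires the full destination list is not shown in this log), might look like:

```ini
# /etc/systemd/network/vxlan0.netdev — hypothetical file name, simplified to one remote
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
Remote=192.168.16.10

# /etc/systemd/network/vxlan0.network — assigns the overlay address to the interface
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
```

After `networkctl reload` (or a networkd restart), the interface comes up with the overlay address; per-node values differ as listed in the "Create systemd networkd netdev files" task items.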
2025-09-19 16:42:49.052979 | orchestrator | 2025-09-19 16:42:49.053084 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-09-19 16:42:49.053098 | orchestrator | 2025-09-19 16:42:49.053108 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-09-19 16:42:49.053119 | orchestrator | Friday 19 September 2025 16:42:25 +0000 (0:00:00.266) 0:00:00.266 ****** 2025-09-19 16:42:49.053129 | orchestrator | ok: [testbed-manager] 2025-09-19 16:42:49.053140 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:42:49.053150 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:42:49.053161 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:42:49.053170 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:42:49.053195 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:42:49.053205 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:42:49.053215 | orchestrator | 2025-09-19 16:42:49.053225 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-09-19 16:42:49.053235 | orchestrator | Friday 19 September 2025 16:42:26 +0000 (0:00:00.679) 0:00:00.946 ****** 2025-09-19 16:42:49.053246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:42:49.053258 | orchestrator | 2025-09-19 16:42:49.053268 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-09-19 16:42:49.053278 | orchestrator | Friday 19 September 2025 16:42:27 +0000 (0:00:01.160) 0:00:02.107 ****** 2025-09-19 16:42:49.053288 | orchestrator | ok: [testbed-manager] 2025-09-19 16:42:49.053297 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:42:49.053307 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:42:49.053317 | 
orchestrator | ok: [testbed-node-2] 2025-09-19 16:42:49.053326 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:42:49.053336 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:42:49.053345 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:42:49.053355 | orchestrator | 2025-09-19 16:42:49.053365 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-09-19 16:42:49.053375 | orchestrator | Friday 19 September 2025 16:42:29 +0000 (0:00:02.021) 0:00:04.129 ****** 2025-09-19 16:42:49.053384 | orchestrator | ok: [testbed-manager] 2025-09-19 16:42:49.053394 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:42:49.053403 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:42:49.053413 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:42:49.053423 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:42:49.053432 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:42:49.053442 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:42:49.053451 | orchestrator | 2025-09-19 16:42:49.053461 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-09-19 16:42:49.053492 | orchestrator | Friday 19 September 2025 16:42:31 +0000 (0:00:01.706) 0:00:05.835 ****** 2025-09-19 16:42:49.053503 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-09-19 16:42:49.053515 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-09-19 16:42:49.053526 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-09-19 16:42:49.053537 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-09-19 16:42:49.053548 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-09-19 16:42:49.053559 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-09-19 16:42:49.053570 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-09-19 16:42:49.053580 | orchestrator | 2025-09-19 16:42:49.053593 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2025-09-19 16:42:49.053604 | orchestrator | Friday 19 September 2025 16:42:32 +0000 (0:00:00.970) 0:00:06.806 ****** 2025-09-19 16:42:49.053615 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 16:42:49.053627 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 16:42:49.053638 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 16:42:49.053649 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-19 16:42:49.053661 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 16:42:49.053672 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 16:42:49.053683 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 16:42:49.053693 | orchestrator | 2025-09-19 16:42:49.053724 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-09-19 16:42:49.053734 | orchestrator | Friday 19 September 2025 16:42:35 +0000 (0:00:03.124) 0:00:09.930 ****** 2025-09-19 16:42:49.053744 | orchestrator | changed: [testbed-manager] 2025-09-19 16:42:49.053754 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:42:49.053763 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:42:49.053773 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:42:49.053783 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:42:49.053792 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:42:49.053801 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:42:49.053811 | orchestrator | 2025-09-19 16:42:49.053820 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-09-19 16:42:49.053830 | orchestrator | Friday 19 September 2025 16:42:36 +0000 (0:00:01.355) 0:00:11.285 ****** 2025-09-19 16:42:49.053839 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 16:42:49.053848 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 16:42:49.053858 | orchestrator | ok: [testbed-node-1 
-> localhost] 2025-09-19 16:42:49.053867 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 16:42:49.053876 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 16:42:49.053886 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 16:42:49.053895 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 16:42:49.053905 | orchestrator | 2025-09-19 16:42:49.053914 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-09-19 16:42:49.053924 | orchestrator | Friday 19 September 2025 16:42:38 +0000 (0:00:01.575) 0:00:12.861 ****** 2025-09-19 16:42:49.053933 | orchestrator | ok: [testbed-manager] 2025-09-19 16:42:49.053943 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:42:49.053952 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:42:49.053961 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:42:49.053971 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:42:49.053980 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:42:49.053989 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:42:49.053999 | orchestrator | 2025-09-19 16:42:49.054008 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-09-19 16:42:49.054089 | orchestrator | Friday 19 September 2025 16:42:39 +0000 (0:00:01.090) 0:00:13.951 ****** 2025-09-19 16:42:49.054100 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:42:49.054110 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:42:49.054120 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:42:49.054138 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:42:49.054148 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:42:49.054187 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:42:49.054198 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:42:49.054208 | orchestrator | 2025-09-19 16:42:49.054218 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2025-09-19 16:42:49.054233 | orchestrator | Friday 19 September 2025 16:42:40 +0000 (0:00:00.641) 0:00:14.593 ****** 2025-09-19 16:42:49.054243 | orchestrator | ok: [testbed-manager] 2025-09-19 16:42:49.054252 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:42:49.054262 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:42:49.054272 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:42:49.054281 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:42:49.054290 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:42:49.054300 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:42:49.054309 | orchestrator | 2025-09-19 16:42:49.054319 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-09-19 16:42:49.054329 | orchestrator | Friday 19 September 2025 16:42:42 +0000 (0:00:02.159) 0:00:16.753 ****** 2025-09-19 16:42:49.054338 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:42:49.054348 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:42:49.054358 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:42:49.054367 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:42:49.054377 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:42:49.054386 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:42:49.054397 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-09-19 16:42:49.054408 | orchestrator | 2025-09-19 16:42:49.054418 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-09-19 16:42:49.054428 | orchestrator | Friday 19 September 2025 16:42:43 +0000 (0:00:00.937) 0:00:17.690 ****** 2025-09-19 16:42:49.054437 | orchestrator | ok: [testbed-manager] 2025-09-19 16:42:49.054447 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:42:49.054457 | orchestrator | changed: [testbed-node-2] 2025-09-19 
16:42:49.054466 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:42:49.054476 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:42:49.054485 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:42:49.054495 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:42:49.054504 | orchestrator | 2025-09-19 16:42:49.054514 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-09-19 16:42:49.054524 | orchestrator | Friday 19 September 2025 16:42:44 +0000 (0:00:01.679) 0:00:19.370 ****** 2025-09-19 16:42:49.054534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:42:49.054545 | orchestrator | 2025-09-19 16:42:49.054555 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-19 16:42:49.054565 | orchestrator | Friday 19 September 2025 16:42:45 +0000 (0:00:01.203) 0:00:20.574 ****** 2025-09-19 16:42:49.054574 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:42:49.054584 | orchestrator | ok: [testbed-manager] 2025-09-19 16:42:49.054594 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:42:49.054603 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:42:49.054613 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:42:49.054622 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:42:49.054632 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:42:49.054641 | orchestrator | 2025-09-19 16:42:49.054651 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-09-19 16:42:49.054661 | orchestrator | Friday 19 September 2025 16:42:47 +0000 (0:00:01.021) 0:00:21.595 ****** 2025-09-19 16:42:49.054671 | orchestrator | ok: [testbed-manager] 2025-09-19 16:42:49.054680 | orchestrator | ok: [testbed-node-0] 2025-09-19 
16:42:49.054690 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:42:49.054734 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:42:49.054744 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:42:49.054753 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:42:49.054763 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:42:49.054772 | orchestrator | 2025-09-19 16:42:49.054782 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-09-19 16:42:49.054791 | orchestrator | Friday 19 September 2025 16:42:47 +0000 (0:00:00.820) 0:00:22.415 ****** 2025-09-19 16:42:49.054801 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 16:42:49.054811 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 16:42:49.054820 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 16:42:49.054830 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 16:42:49.054839 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 16:42:49.054848 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 16:42:49.054858 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 16:42:49.054867 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 16:42:49.054876 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-09-19 16:42:49.054886 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 16:42:49.054895 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 16:42:49.054905 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-09-19 16:42:49.054914 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 16:42:49.054924 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-19 16:42:49.054933 | orchestrator |
2025-09-19 16:42:49.054949 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-09-19 16:43:05.707356 | orchestrator | Friday 19 September 2025 16:42:49 +0000 (0:00:01.191) 0:00:23.607 ******
2025-09-19 16:43:05.707475 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:43:05.707492 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:43:05.707504 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:43:05.707515 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:43:05.707526 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:43:05.707536 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:43:05.707548 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:43:05.707559 | orchestrator |
2025-09-19 16:43:05.707588 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-09-19 16:43:05.707599 | orchestrator | Friday 19 September 2025 16:42:49 +0000 (0:00:00.622) 0:00:24.230 ******
2025-09-19 16:43:05.707612 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-2, testbed-node-3, testbed-node-5, testbed-node-4
2025-09-19 16:43:05.707625 | orchestrator |
2025-09-19 16:43:05.707636 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-09-19 16:43:05.707647 | orchestrator | Friday 19 September 2025 16:42:54 +0000 (0:00:04.710) 0:00:28.940 ******
2025-09-19 16:43:05.707659 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.707672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.707754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.707768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.707780 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:05.707791 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.707802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.707813 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.707824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:05.707842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:05.707853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:05.707881 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:05.707894 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:05.707909 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:05.707921 | orchestrator |
2025-09-19 16:43:05.707934 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-09-19 16:43:05.707946 | orchestrator | Friday 19 September 2025 16:43:00 +0000 (0:00:05.745) 0:00:34.685 ******
2025-09-19 16:43:05.707959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.707981 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.707994 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:05.708006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:05.708019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.708031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.708044 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.708057 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.708070 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-19 16:43:05.708082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:05.708102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:05.708116 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:05.708135 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:11.308602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-19 16:43:11.308761 | orchestrator |
2025-09-19 16:43:11.308781 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-09-19 16:43:11.308793 | orchestrator | Friday 19 September 2025 16:43:05 +0000 (0:00:05.576) 0:00:40.261 ******
2025-09-19 16:43:11.308827 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:43:11.308839 | orchestrator |
2025-09-19 16:43:11.308850 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-19 16:43:11.308861 | orchestrator | Friday 19 September 2025 16:43:06 +0000 (0:00:01.020) 0:00:41.282 ******
2025-09-19 16:43:11.308872 | orchestrator | ok: [testbed-manager]
2025-09-19 16:43:11.308884 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:43:11.308894 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:43:11.308905 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:43:11.308915 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:43:11.308925 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:43:11.308936 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:43:11.308947 | orchestrator |
2025-09-19 16:43:11.308957 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-19 16:43:11.308968 | orchestrator | Friday 19 September 2025 16:43:07 +0000 (0:00:00.937) 0:00:42.219 ******
2025-09-19 16:43:11.308979 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 16:43:11.308990 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 16:43:11.309000 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 16:43:11.309011 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 16:43:11.309021 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 16:43:11.309032 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 16:43:11.309042 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 16:43:11.309053 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 16:43:11.309063 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:43:11.309075 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 16:43:11.309085 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 16:43:11.309096 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 16:43:11.309106 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 16:43:11.309117 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:43:11.309130 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 16:43:11.309142 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 16:43:11.309154 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 16:43:11.309166 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 16:43:11.309178 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:43:11.309190 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 16:43:11.309202 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 16:43:11.309214 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 16:43:11.309227 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 16:43:11.309239 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:43:11.309251 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 16:43:11.309262 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 16:43:11.309274 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 16:43:11.309293 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:43:11.309306 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 16:43:11.309319 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:43:11.309331 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-09-19 16:43:11.309343 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-09-19 16:43:11.309355 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-09-19 16:43:11.309367 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-09-19 16:43:11.309379 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:43:11.309391 | orchestrator |
2025-09-19 16:43:11.309403 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-09-19 16:43:11.309432 | orchestrator | Friday 19 September 2025 16:43:09 +0000 (0:00:01.968) 0:00:44.188 ******
2025-09-19 16:43:11.309444 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:43:11.309456 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:43:11.309469 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:43:11.309481 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:43:11.309491 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:43:11.309508 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:43:11.309519 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:43:11.309530 | orchestrator |
2025-09-19 16:43:11.309540 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-09-19 16:43:11.309551 | orchestrator | Friday 19 September 2025 16:43:10 +0000 (0:00:00.610) 0:00:44.798 ******
2025-09-19 16:43:11.309562 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:43:11.309572 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:43:11.309583 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:43:11.309593 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:43:11.309604 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:43:11.309614 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:43:11.309624 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:43:11.309635 | orchestrator |
2025-09-19 16:43:11.309645 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:43:11.309658 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 16:43:11.309669 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 16:43:11.309680 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 16:43:11.309691 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 16:43:11.309701 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 16:43:11.309784 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 16:43:11.309796 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 16:43:11.309806 | orchestrator |
2025-09-19 16:43:11.309817 | orchestrator |
2025-09-19 16:43:11.309828 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:43:11.309838 | orchestrator | Friday 19 September 2025 16:43:10 +0000 (0:00:00.716) 0:00:45.514 ******
2025-09-19 16:43:11.309849 | orchestrator | ===============================================================================
2025-09-19 16:43:11.309868 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.75s
2025-09-19 16:43:11.309878 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.58s
2025-09-19 16:43:11.309889 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.71s
2025-09-19 16:43:11.309899 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.12s
2025-09-19 16:43:11.309910 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.16s
2025-09-19 16:43:11.309920 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.02s
2025-09-19 16:43:11.309931 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.97s
2025-09-19 16:43:11.309942 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.71s
2025-09-19 16:43:11.309952 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.68s
2025-09-19 16:43:11.309962 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.58s
2025-09-19 16:43:11.309973 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.36s
2025-09-19 16:43:11.309984 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.20s
2025-09-19 16:43:11.309994 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.19s
2025-09-19 16:43:11.310005 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.16s
2025-09-19 16:43:11.310015 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.09s
2025-09-19 16:43:11.310083 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.02s
2025-09-19 16:43:11.310095 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.02s
2025-09-19 16:43:11.310105 | orchestrator | osism.commons.network : Create required directories --------------------- 0.97s
2025-09-19 16:43:11.310115 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.94s
2025-09-19 16:43:11.310126 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.94s
2025-09-19 16:43:11.595793 | orchestrator | + osism apply wireguard
2025-09-19 16:43:23.568381 | orchestrator | 2025-09-19 16:43:23 | INFO  | Task 54cf8c4f-5f06-4356-aed5-310f2f528dd3 (wireguard) was prepared for execution.
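The "Create systemd networkd netdev files" task above renders each vxlan item (vni, local_ip, mtu, dests) into a systemd-networkd netdev unit; the file names `/etc/systemd/network/30-vxlan0.netdev` and `30-vxlan1.netdev` appear in the cleanup task's skip list. A minimal sketch of what such a unit plausibly contains for the testbed-manager vxlan0 item — the actual template ships with the osism.commons.network role and may differ; the keys below are standard systemd.netdev(5) options, not copied from that template:

```ini
# Hypothetical rendering of the testbed-manager vxlan0 item
# (vni=42, local_ip=192.168.16.5, mtu=1350).
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
# The per-node 'dests' list is typically realized as unicast
# flooding entries (e.g. [BridgeFDB] sections with Destination=
# in the companion .network file) rather than a single Remote=.
```

The matching `30-vxlan0.network` file created by the following task would then match `Name=vxlan0` and carry the `addresses` entry (e.g. `192.168.112.5/20` on the manager).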
2025-09-19 16:43:23.568485 | orchestrator | 2025-09-19 16:43:23 | INFO  | It takes a moment until task 54cf8c4f-5f06-4356-aed5-310f2f528dd3 (wireguard) has been started and output is visible here.
2025-09-19 16:43:42.886777 | orchestrator |
2025-09-19 16:43:42.886867 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-09-19 16:43:42.886877 | orchestrator |
2025-09-19 16:43:42.886884 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-09-19 16:43:42.886905 | orchestrator | Friday 19 September 2025 16:43:27 +0000 (0:00:00.219) 0:00:00.219 ******
2025-09-19 16:43:42.886912 | orchestrator | ok: [testbed-manager]
2025-09-19 16:43:42.886920 | orchestrator |
2025-09-19 16:43:42.886926 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-09-19 16:43:42.886933 | orchestrator | Friday 19 September 2025 16:43:29 +0000 (0:00:01.551) 0:00:01.771 ******
2025-09-19 16:43:42.886939 | orchestrator | changed: [testbed-manager]
2025-09-19 16:43:42.886947 | orchestrator |
2025-09-19 16:43:42.886964 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-09-19 16:43:42.886971 | orchestrator | Friday 19 September 2025 16:43:35 +0000 (0:00:06.355) 0:00:08.127 ******
2025-09-19 16:43:42.886983 | orchestrator | changed: [testbed-manager]
2025-09-19 16:43:42.886990 | orchestrator |
2025-09-19 16:43:42.886996 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-09-19 16:43:42.887002 | orchestrator | Friday 19 September 2025 16:43:35 +0000 (0:00:00.559) 0:00:08.686 ******
2025-09-19 16:43:42.887008 | orchestrator | changed: [testbed-manager]
2025-09-19 16:43:42.887032 | orchestrator |
2025-09-19 16:43:42.887039 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-09-19 16:43:42.887045 | orchestrator | Friday 19 September 2025 16:43:36 +0000 (0:00:00.412) 0:00:09.099 ******
2025-09-19 16:43:42.887052 | orchestrator | ok: [testbed-manager]
2025-09-19 16:43:42.887058 | orchestrator |
2025-09-19 16:43:42.887064 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-09-19 16:43:42.887070 | orchestrator | Friday 19 September 2025 16:43:36 +0000 (0:00:00.535) 0:00:09.634 ******
2025-09-19 16:43:42.887076 | orchestrator | ok: [testbed-manager]
2025-09-19 16:43:42.887082 | orchestrator |
2025-09-19 16:43:42.887088 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-09-19 16:43:42.887094 | orchestrator | Friday 19 September 2025 16:43:37 +0000 (0:00:00.543) 0:00:10.178 ******
2025-09-19 16:43:42.887100 | orchestrator | ok: [testbed-manager]
2025-09-19 16:43:42.887106 | orchestrator |
2025-09-19 16:43:42.887112 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-09-19 16:43:42.887118 | orchestrator | Friday 19 September 2025 16:43:37 +0000 (0:00:00.407) 0:00:10.585 ******
2025-09-19 16:43:42.887124 | orchestrator | changed: [testbed-manager]
2025-09-19 16:43:42.887130 | orchestrator |
2025-09-19 16:43:42.887136 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-09-19 16:43:42.887142 | orchestrator | Friday 19 September 2025 16:43:39 +0000 (0:00:01.169) 0:00:11.755 ******
2025-09-19 16:43:42.887148 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-19 16:43:42.887154 | orchestrator | changed: [testbed-manager]
2025-09-19 16:43:42.887161 | orchestrator |
2025-09-19 16:43:42.887167 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-09-19 16:43:42.887173 | orchestrator | Friday 19 September 2025 16:43:39 +0000 (0:00:00.932) 0:00:12.687 ******
2025-09-19 16:43:42.887179 | orchestrator | changed: [testbed-manager]
2025-09-19 16:43:42.887185 | orchestrator |
2025-09-19 16:43:42.887191 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-09-19 16:43:42.887197 | orchestrator | Friday 19 September 2025 16:43:41 +0000 (0:00:01.692) 0:00:14.380 ******
2025-09-19 16:43:42.887203 | orchestrator | changed: [testbed-manager]
2025-09-19 16:43:42.887209 | orchestrator |
2025-09-19 16:43:42.887215 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:43:42.887221 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:43:42.887229 | orchestrator |
2025-09-19 16:43:42.887235 | orchestrator |
2025-09-19 16:43:42.887241 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:43:42.887247 | orchestrator | Friday 19 September 2025 16:43:42 +0000 (0:00:00.937) 0:00:15.317 ******
2025-09-19 16:43:42.887253 | orchestrator | ===============================================================================
2025-09-19 16:43:42.887259 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.36s
2025-09-19 16:43:42.887265 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s
2025-09-19 16:43:42.887271 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.55s
2025-09-19 16:43:42.887277 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.17s
2025-09-19 16:43:42.887283 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.94s
2025-09-19 16:43:42.887289 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.93s
2025-09-19 16:43:42.887295 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s
2025-09-19 16:43:42.887302 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.54s
2025-09-19 16:43:42.887309 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s
2025-09-19 16:43:42.887316 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s
2025-09-19 16:43:42.887328 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2025-09-19 16:43:43.166840 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-09-19 16:43:43.202005 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-09-19 16:43:43.202177 | orchestrator | Dload Upload Total Spent Left Speed
2025-09-19 16:43:43.276985 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 200 0 --:--:-- --:--:-- --:--:-- 202
2025-09-19 16:43:43.290194 | orchestrator | + osism apply --environment custom workarounds
2025-09-19 16:43:45.182885 | orchestrator | 2025-09-19 16:43:45 | INFO  | Trying to run play workarounds in environment custom
2025-09-19 16:43:55.326304 | orchestrator | 2025-09-19 16:43:55 | INFO  | Task fb4ca81c-81d5-4c9a-9152-bc3cd0f59820 (workarounds) was prepared for execution.
2025-09-19 16:43:55.326396 | orchestrator | 2025-09-19 16:43:55 | INFO  | It takes a moment until task fb4ca81c-81d5-4c9a-9152-bc3cd0f59820 (workarounds) has been started and output is visible here.
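The wireguard play above generates server and preshared keys, writes `wg0.conf`, and manages `wg-quick@wg0.service`. For orientation, this is the general shape of a wg-quick configuration file; all addresses and key placeholders here are illustrative only, not values from this run (the real keys are produced by the "Create public and private key - server" and "Create preshared key" tasks):

```ini
# Hypothetical wg0.conf layout; real keys and tunnel addresses
# are generated by the osism.services.wireguard role.
[Interface]
Address = 192.168.48.1/24              # placeholder tunnel address
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.48.2/32
```

The "Copy client configuration files" task writes the mirror-image client side of this file, which `prepare-wireguard-configuration.sh` then fetches for the CI node.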
2025-09-19 16:44:20.631579 | orchestrator |
2025-09-19 16:44:20.631871 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 16:44:20.631892 | orchestrator |
2025-09-19 16:44:20.631904 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-09-19 16:44:20.631916 | orchestrator | Friday 19 September 2025 16:43:59 +0000 (0:00:00.148) 0:00:00.148 ******
2025-09-19 16:44:20.631927 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-09-19 16:44:20.631939 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-09-19 16:44:20.631950 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-09-19 16:44:20.631961 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-09-19 16:44:20.631971 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-09-19 16:44:20.631982 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-09-19 16:44:20.631993 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-09-19 16:44:20.632004 | orchestrator |
2025-09-19 16:44:20.632015 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-09-19 16:44:20.632026 | orchestrator |
2025-09-19 16:44:20.632037 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-09-19 16:44:20.632048 | orchestrator | Friday 19 September 2025 16:43:59 +0000 (0:00:00.757) 0:00:00.905 ******
2025-09-19 16:44:20.632059 | orchestrator | ok: [testbed-manager]
2025-09-19 16:44:20.632071 | orchestrator |
2025-09-19 16:44:20.632082 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-09-19 16:44:20.632093 | orchestrator |
2025-09-19 16:44:20.632104 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-09-19 16:44:20.632116 | orchestrator | Friday 19 September 2025 16:44:02 +0000 (0:00:02.404) 0:00:03.310 ******
2025-09-19 16:44:20.632129 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:44:20.632141 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:44:20.632153 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:44:20.632165 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:44:20.632177 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:44:20.632189 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:44:20.632202 | orchestrator |
2025-09-19 16:44:20.632215 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-09-19 16:44:20.632227 | orchestrator |
2025-09-19 16:44:20.632239 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-09-19 16:44:20.632251 | orchestrator | Friday 19 September 2025 16:44:04 +0000 (0:00:01.841) 0:00:05.151 ******
2025-09-19 16:44:20.632264 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 16:44:20.632278 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 16:44:20.632309 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 16:44:20.632322 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 16:44:20.632334 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 16:44:20.632347 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-09-19 16:44:20.632359 | orchestrator |
2025-09-19 16:44:20.632371 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-09-19 16:44:20.632383 | orchestrator | Friday 19 September 2025 16:44:05 +0000 (0:00:01.484) 0:00:06.636 ******
2025-09-19 16:44:20.632396 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:44:20.632408 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:44:20.632421 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:44:20.632433 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:44:20.632445 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:44:20.632457 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:44:20.632468 | orchestrator |
2025-09-19 16:44:20.632479 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-09-19 16:44:20.632490 | orchestrator | Friday 19 September 2025 16:44:09 +0000 (0:00:03.836) 0:00:10.472 ******
2025-09-19 16:44:20.632501 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:44:20.632512 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:44:20.632523 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:44:20.632533 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:44:20.632544 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:44:20.632555 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:44:20.632566 | orchestrator |
2025-09-19 16:44:20.632577 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-09-19 16:44:20.632587 | orchestrator |
2025-09-19 16:44:20.632598 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-09-19 16:44:20.632609 | orchestrator | Friday 19 September 2025 16:44:10 +0000 (0:00:00.688) 0:00:11.161 ******
2025-09-19 16:44:20.632620 | orchestrator | changed: [testbed-manager]
2025-09-19 16:44:20.632631 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:44:20.632641 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:44:20.632652 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:44:20.632663 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:44:20.632673 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:44:20.632684 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:44:20.632695 | orchestrator |
2025-09-19 16:44:20.632706 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-09-19 16:44:20.632717 | orchestrator | Friday 19 September 2025 16:44:11 +0000 (0:00:01.599) 0:00:12.760 ******
2025-09-19 16:44:20.632755 | orchestrator | changed: [testbed-manager]
2025-09-19 16:44:20.632768 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:44:20.632778 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:44:20.632789 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:44:20.632800 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:44:20.632810 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:44:20.632839 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:44:20.632850 | orchestrator |
2025-09-19 16:44:20.632861 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-09-19 16:44:20.632873 | orchestrator | Friday 19 September 2025 16:44:13 +0000 (0:00:01.597) 0:00:14.358 ******
2025-09-19 16:44:20.632883 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:44:20.632894 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:44:20.632905 | orchestrator | ok: [testbed-manager]
2025-09-19 16:44:20.632916 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:44:20.632926 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:44:20.632944 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:44:20.632955 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:44:20.632965 | orchestrator |
2025-09-19 16:44:20.632976 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-09-19 16:44:20.632987 | orchestrator | Friday 19 September 2025 16:44:15 +0000 (0:00:01.602) 0:00:15.961 ******
2025-09-19 16:44:20.632998 | orchestrator | changed: [testbed-manager]
2025-09-19 16:44:20.633008 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:44:20.633019 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:44:20.633029 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:44:20.633040 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:44:20.633050 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:44:20.633061 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:44:20.633071 | orchestrator |
2025-09-19 16:44:20.633082 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-09-19 16:44:20.633093 | orchestrator | Friday 19 September 2025 16:44:16 +0000 (0:00:01.766) 0:00:17.727 ******
2025-09-19 16:44:20.633103 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:44:20.633114 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:44:20.633124 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:44:20.633135 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:44:20.633145 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:44:20.633155 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:44:20.633166 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:44:20.633177 | orchestrator |
2025-09-19 16:44:20.633187 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-09-19 16:44:20.633198 | orchestrator |
2025-09-19 16:44:20.633209 | orchestrator | TASK [Install python3-docker] **************************************************
2025-09-19 16:44:20.633220 | orchestrator | Friday 19 September 2025 16:44:17 +0000 (0:00:00.611) 0:00:18.339 ******
2025-09-19 16:44:20.633230 | orchestrator | ok: [testbed-manager]
2025-09-19 16:44:20.633241 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:44:20.633251 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:44:20.633262 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:44:20.633273 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:44:20.633283 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:44:20.633294 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:44:20.633305 | orchestrator |
2025-09-19 16:44:20.633316 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:44:20.633328 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 16:44:20.633340 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:44:20.633351 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:44:20.633362 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:44:20.633373 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:44:20.633383 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:44:20.633394 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:44:20.633405 | orchestrator |
2025-09-19 16:44:20.633416 | orchestrator |
2025-09-19 16:44:20.633426 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:44:20.633437 | orchestrator | Friday 19 September 2025 16:44:20 +0000 (0:00:03.193) 0:00:21.532 ******
2025-09-19 16:44:20.633456 | orchestrator | ===============================================================================
2025-09-19 16:44:20.633467 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.84s
2025-09-19 16:44:20.633477 | orchestrator |
Install python3-docker -------------------------------------------------- 3.19s 2025-09-19 16:44:20.633488 | orchestrator | Apply netplan configuration --------------------------------------------- 2.40s 2025-09-19 16:44:20.633499 | orchestrator | Apply netplan configuration --------------------------------------------- 1.84s 2025-09-19 16:44:20.633510 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.77s 2025-09-19 16:44:20.633520 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.60s 2025-09-19 16:44:20.633531 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.60s 2025-09-19 16:44:20.633542 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.60s 2025-09-19 16:44:20.633557 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.48s 2025-09-19 16:44:20.633568 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.76s 2025-09-19 16:44:20.633579 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s 2025-09-19 16:44:20.633596 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s 2025-09-19 16:44:21.205215 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-19 16:44:33.234511 | orchestrator | 2025-09-19 16:44:33 | INFO  | Task 42596d58-6e9a-44bb-a7c0-faea5f6fed4f (reboot) was prepared for execution. 2025-09-19 16:44:33.234606 | orchestrator | 2025-09-19 16:44:33 | INFO  | It takes a moment until task 42596d58-6e9a-44bb-a7c0-faea5f6fed4f (reboot) has been started and output is visible here. 
2025-09-19 16:44:43.353683 | orchestrator |
2025-09-19 16:44:43.353863 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 16:44:43.353882 | orchestrator |
2025-09-19 16:44:43.353894 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 16:44:43.353906 | orchestrator | Friday 19 September 2025 16:44:37 +0000 (0:00:00.209) 0:00:00.209 ******
2025-09-19 16:44:43.353917 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:44:43.353929 | orchestrator |
2025-09-19 16:44:43.353940 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 16:44:43.353951 | orchestrator | Friday 19 September 2025 16:44:37 +0000 (0:00:00.115) 0:00:00.325 ******
2025-09-19 16:44:43.353962 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:44:43.353973 | orchestrator |
2025-09-19 16:44:43.353984 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 16:44:43.353995 | orchestrator | Friday 19 September 2025 16:44:38 +0000 (0:00:00.955) 0:00:01.280 ******
2025-09-19 16:44:43.354006 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:44:43.354078 | orchestrator |
2025-09-19 16:44:43.354092 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 16:44:43.354103 | orchestrator |
2025-09-19 16:44:43.354114 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 16:44:43.354125 | orchestrator | Friday 19 September 2025 16:44:38 +0000 (0:00:00.130) 0:00:01.411 ******
2025-09-19 16:44:43.354136 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:44:43.354147 | orchestrator |
2025-09-19 16:44:43.354158 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 16:44:43.354169 | orchestrator | Friday 19 September 2025 16:44:38 +0000 (0:00:00.107) 0:00:01.519 ******
2025-09-19 16:44:43.354180 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:44:43.354191 | orchestrator |
2025-09-19 16:44:43.354202 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 16:44:43.354213 | orchestrator | Friday 19 September 2025 16:44:39 +0000 (0:00:00.744) 0:00:02.263 ******
2025-09-19 16:44:43.354226 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:44:43.354260 | orchestrator |
2025-09-19 16:44:43.354273 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 16:44:43.354285 | orchestrator |
2025-09-19 16:44:43.354298 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 16:44:43.354310 | orchestrator | Friday 19 September 2025 16:44:39 +0000 (0:00:00.134) 0:00:02.398 ******
2025-09-19 16:44:43.354322 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:44:43.354334 | orchestrator |
2025-09-19 16:44:43.354347 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 16:44:43.354359 | orchestrator | Friday 19 September 2025 16:44:39 +0000 (0:00:00.199) 0:00:02.597 ******
2025-09-19 16:44:43.354371 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:44:43.354382 | orchestrator |
2025-09-19 16:44:43.354395 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 16:44:43.354412 | orchestrator | Friday 19 September 2025 16:44:40 +0000 (0:00:00.728) 0:00:03.326 ******
2025-09-19 16:44:43.354430 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:44:43.354448 | orchestrator |
2025-09-19 16:44:43.354466 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 16:44:43.354484 | orchestrator |
2025-09-19 16:44:43.354501 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 16:44:43.354520 | orchestrator | Friday 19 September 2025 16:44:40 +0000 (0:00:00.138) 0:00:03.464 ******
2025-09-19 16:44:43.354540 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:44:43.354560 | orchestrator |
2025-09-19 16:44:43.354579 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 16:44:43.354592 | orchestrator | Friday 19 September 2025 16:44:40 +0000 (0:00:00.112) 0:00:03.577 ******
2025-09-19 16:44:43.354603 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:44:43.354614 | orchestrator |
2025-09-19 16:44:43.354625 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 16:44:43.354635 | orchestrator | Friday 19 September 2025 16:44:41 +0000 (0:00:00.678) 0:00:04.255 ******
2025-09-19 16:44:43.354646 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:44:43.354657 | orchestrator |
2025-09-19 16:44:43.354667 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 16:44:43.354678 | orchestrator |
2025-09-19 16:44:43.354689 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 16:44:43.354700 | orchestrator | Friday 19 September 2025 16:44:41 +0000 (0:00:00.118) 0:00:04.374 ******
2025-09-19 16:44:43.354710 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:44:43.354721 | orchestrator |
2025-09-19 16:44:43.354758 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 16:44:43.354771 | orchestrator | Friday 19 September 2025 16:44:41 +0000 (0:00:00.098) 0:00:04.472 ******
2025-09-19 16:44:43.354781 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:44:43.354792 | orchestrator |
2025-09-19 16:44:43.354802 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 16:44:43.354813 | orchestrator | Friday 19 September 2025 16:44:42 +0000 (0:00:00.664) 0:00:05.137 ******
2025-09-19 16:44:43.354824 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:44:43.354834 | orchestrator |
2025-09-19 16:44:43.354845 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-09-19 16:44:43.354856 | orchestrator |
2025-09-19 16:44:43.354866 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-09-19 16:44:43.354877 | orchestrator | Friday 19 September 2025 16:44:42 +0000 (0:00:00.119) 0:00:05.257 ******
2025-09-19 16:44:43.354888 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:44:43.354898 | orchestrator |
2025-09-19 16:44:43.354909 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-09-19 16:44:43.354920 | orchestrator | Friday 19 September 2025 16:44:42 +0000 (0:00:00.114) 0:00:05.372 ******
2025-09-19 16:44:43.354930 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:44:43.354941 | orchestrator |
2025-09-19 16:44:43.354951 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-09-19 16:44:43.354973 | orchestrator | Friday 19 September 2025 16:44:43 +0000 (0:00:00.685) 0:00:06.058 ******
2025-09-19 16:44:43.355004 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:44:43.355016 | orchestrator |
2025-09-19 16:44:43.355026 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:44:43.355039 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:44:43.355051 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:44:43.355079 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:44:43.355091 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:44:43.355102 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:44:43.355112 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:44:43.355123 | orchestrator |
2025-09-19 16:44:43.355134 | orchestrator |
2025-09-19 16:44:43.355145 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:44:43.355156 | orchestrator | Friday 19 September 2025 16:44:43 +0000 (0:00:00.032) 0:00:06.090 ******
2025-09-19 16:44:43.355167 | orchestrator | ===============================================================================
2025-09-19 16:44:43.355178 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.46s
2025-09-19 16:44:43.355193 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.75s
2025-09-19 16:44:43.355204 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.67s
2025-09-19 16:44:43.650237 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-09-19 16:44:55.659143 | orchestrator | 2025-09-19 16:44:55 | INFO  | Task 5fc32e60-4b6a-412c-b582-69c96964a170 (wait-for-connection) was prepared for execution.
2025-09-19 16:44:55.659247 | orchestrator | 2025-09-19 16:44:55 | INFO  | It takes a moment until task 5fc32e60-4b6a-412c-b582-69c96964a170 (wait-for-connection) has been started and output is visible here.
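The sequence above reboots the nodes without waiting (`osism apply reboot`), then re-establishes contact with a separate `wait-for-connection` run. The retry loop behind such a wait step can be sketched in plain bash; the `check` command below is an injectable stand-in (an assumption for testability), where a real probe might be something like `ssh -o ConnectTimeout=5 <host> true`:

```shell
#!/usr/bin/env bash
# Sketch of a "wait until remote system is reachable" poll, analogous to
# the wait-for-connection play above. The reachability probe is passed in
# as a command so the loop can be exercised without real hosts.
wait_for_reachable() {
    local max_attempts=$1; shift
    local attempt=1
    until "$@"; do
        if (( attempt++ == max_attempts )); then
            return 1
        fi
        sleep 1
    done
    return 0
}

# Demo with a fake probe that succeeds on its third call (hypothetical).
tries=0
check() { tries=$((tries + 1)); (( tries >= 3 )); }
wait_for_reachable 10 check && echo "reachable after $tries attempts"
```

The separate "do not wait" reboot plus an explicit wait keeps the reboot task from failing when the SSH connection drops mid-task.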
2025-09-19 16:45:11.273997 | orchestrator | 2025-09-19 16:45:11.274172 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-19 16:45:11.274192 | orchestrator | 2025-09-19 16:45:11.274204 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-19 16:45:11.274216 | orchestrator | Friday 19 September 2025 16:44:59 +0000 (0:00:00.214) 0:00:00.214 ****** 2025-09-19 16:45:11.274227 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:45:11.274239 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:45:11.274250 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:45:11.274261 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:45:11.274271 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:45:11.274282 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:45:11.274293 | orchestrator | 2025-09-19 16:45:11.274304 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 16:45:11.274316 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 16:45:11.274328 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 16:45:11.274340 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 16:45:11.274378 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 16:45:11.274390 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 16:45:11.274401 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 16:45:11.274411 | orchestrator | 2025-09-19 16:45:11.274422 | orchestrator | 2025-09-19 16:45:11.274433 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-19 16:45:11.274458 | orchestrator | Friday 19 September 2025 16:45:11 +0000 (0:00:11.480) 0:00:11.695 ****** 2025-09-19 16:45:11.274469 | orchestrator | =============================================================================== 2025-09-19 16:45:11.274480 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.48s 2025-09-19 16:45:11.543579 | orchestrator | + osism apply hddtemp 2025-09-19 16:45:23.475280 | orchestrator | 2025-09-19 16:45:23 | INFO  | Task 2f43da2b-1747-4562-b21d-3dfbcfe6b6cb (hddtemp) was prepared for execution. 2025-09-19 16:45:23.475394 | orchestrator | 2025-09-19 16:45:23 | INFO  | It takes a moment until task 2f43da2b-1747-4562-b21d-3dfbcfe6b6cb (hddtemp) has been started and output is visible here. 2025-09-19 16:45:51.729807 | orchestrator | 2025-09-19 16:45:51.729913 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-19 16:45:51.729924 | orchestrator | 2025-09-19 16:45:51.729932 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-19 16:45:51.729939 | orchestrator | Friday 19 September 2025 16:45:27 +0000 (0:00:00.278) 0:00:00.278 ****** 2025-09-19 16:45:51.729946 | orchestrator | ok: [testbed-manager] 2025-09-19 16:45:51.729955 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:45:51.729962 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:45:51.729968 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:45:51.729975 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:45:51.729982 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:45:51.729988 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:45:51.729995 | orchestrator | 2025-09-19 16:45:51.730002 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-19 16:45:51.730008 | orchestrator | Friday 19 September 2025 
16:45:28 +0000 (0:00:00.730) 0:00:01.008 ****** 2025-09-19 16:45:51.730071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:45:51.730090 | orchestrator | 2025-09-19 16:45:51.730102 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-19 16:45:51.730111 | orchestrator | Friday 19 September 2025 16:45:29 +0000 (0:00:01.169) 0:00:02.178 ****** 2025-09-19 16:45:51.730118 | orchestrator | ok: [testbed-manager] 2025-09-19 16:45:51.730124 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:45:51.730131 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:45:51.730138 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:45:51.730144 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:45:51.730151 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:45:51.730157 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:45:51.730164 | orchestrator | 2025-09-19 16:45:51.730170 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-19 16:45:51.730177 | orchestrator | Friday 19 September 2025 16:45:31 +0000 (0:00:02.096) 0:00:04.275 ****** 2025-09-19 16:45:51.730184 | orchestrator | changed: [testbed-manager] 2025-09-19 16:45:51.730191 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:45:51.730198 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:45:51.730205 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:45:51.730211 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:45:51.730236 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:45:51.730243 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:45:51.730249 | orchestrator | 2025-09-19 16:45:51.730256 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-09-19 16:45:51.730263 | orchestrator | Friday 19 September 2025 16:45:32 +0000 (0:00:01.122) 0:00:05.397 ****** 2025-09-19 16:45:51.730269 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:45:51.730276 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:45:51.730282 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:45:51.730289 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:45:51.730295 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:45:51.730302 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:45:51.730309 | orchestrator | ok: [testbed-manager] 2025-09-19 16:45:51.730317 | orchestrator | 2025-09-19 16:45:51.730324 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-19 16:45:51.730332 | orchestrator | Friday 19 September 2025 16:45:33 +0000 (0:00:01.101) 0:00:06.498 ****** 2025-09-19 16:45:51.730340 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:45:51.730347 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:45:51.730355 | orchestrator | changed: [testbed-manager] 2025-09-19 16:45:51.730362 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:45:51.730370 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:45:51.730377 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:45:51.730384 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:45:51.730392 | orchestrator | 2025-09-19 16:45:51.730399 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-19 16:45:51.730407 | orchestrator | Friday 19 September 2025 16:45:34 +0000 (0:00:00.829) 0:00:07.328 ****** 2025-09-19 16:45:51.730414 | orchestrator | changed: [testbed-manager] 2025-09-19 16:45:51.730421 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:45:51.730429 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:45:51.730436 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:45:51.730443 | orchestrator | changed: 
[testbed-node-3] 2025-09-19 16:45:51.730451 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:45:51.730458 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:45:51.730465 | orchestrator | 2025-09-19 16:45:51.730473 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-19 16:45:51.730480 | orchestrator | Friday 19 September 2025 16:45:48 +0000 (0:00:13.566) 0:00:20.894 ****** 2025-09-19 16:45:51.730488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 16:45:51.730495 | orchestrator | 2025-09-19 16:45:51.730503 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-19 16:45:51.730510 | orchestrator | Friday 19 September 2025 16:45:49 +0000 (0:00:01.407) 0:00:22.302 ****** 2025-09-19 16:45:51.730530 | orchestrator | changed: [testbed-manager] 2025-09-19 16:45:51.730538 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:45:51.730545 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:45:51.730552 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:45:51.730559 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:45:51.730567 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:45:51.730574 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:45:51.730581 | orchestrator | 2025-09-19 16:45:51.730589 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 16:45:51.730597 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 16:45:51.730619 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 16:45:51.730628 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 16:45:51.730641 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 16:45:51.730648 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 16:45:51.730656 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 16:45:51.730663 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 16:45:51.730671 | orchestrator | 2025-09-19 16:45:51.730678 | orchestrator | 2025-09-19 16:45:51.730686 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 16:45:51.730692 | orchestrator | Friday 19 September 2025 16:45:51 +0000 (0:00:01.837) 0:00:24.140 ****** 2025-09-19 16:45:51.730699 | orchestrator | =============================================================================== 2025-09-19 16:45:51.730706 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.57s 2025-09-19 16:45:51.730712 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.10s 2025-09-19 16:45:51.730718 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.84s 2025-09-19 16:45:51.730725 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.41s 2025-09-19 16:45:51.730731 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.17s 2025-09-19 16:45:51.730738 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.12s 2025-09-19 16:45:51.730767 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.10s 2025-09-19 16:45:51.730774 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.83s 2025-09-19 16:45:51.730781 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s 2025-09-19 16:45:52.005502 | orchestrator | ++ semver latest 7.1.1 2025-09-19 16:45:52.067298 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-19 16:45:52.067387 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-19 16:45:52.067402 | orchestrator | + sudo systemctl restart manager.service 2025-09-19 16:46:05.605127 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-19 16:46:05.605309 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-19 16:46:05.605325 | orchestrator | + local max_attempts=60 2025-09-19 16:46:05.605336 | orchestrator | + local name=ceph-ansible 2025-09-19 16:46:05.605346 | orchestrator | + local attempt_num=1 2025-09-19 16:46:05.605366 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 16:46:05.633625 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 16:46:05.633671 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 16:46:05.633681 | orchestrator | + sleep 5 2025-09-19 16:46:10.637066 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 16:46:10.667621 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 16:46:10.667687 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 16:46:10.667700 | orchestrator | + sleep 5 2025-09-19 16:46:15.670992 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 16:46:15.704160 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-19 16:46:15.704251 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-19 16:46:15.704266 | orchestrator | + sleep 5 2025-09-19 16:46:20.708821 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-19 16:46:20.750595 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 16:46:20.750674 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 16:46:20.750687 | orchestrator | + sleep 5
2025-09-19 16:46:25.756130 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 16:46:25.794222 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 16:46:25.794327 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 16:46:25.794367 | orchestrator | + sleep 5
2025-09-19 16:46:30.799559 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 16:46:30.840290 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 16:46:30.840372 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 16:46:30.840393 | orchestrator | + sleep 5
2025-09-19 16:46:35.845718 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 16:46:35.884636 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-19 16:46:35.884716 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 16:46:35.884724 | orchestrator | + sleep 5
2025-09-19 16:46:40.889843 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 16:46:40.942627 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 16:46:40.942684 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 16:46:40.942689 | orchestrator | + sleep 5
2025-09-19 16:46:45.944972 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 16:46:45.971870 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 16:46:45.971887 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 16:46:45.972000 | orchestrator | + sleep 5
2025-09-19 16:46:50.975344 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 16:46:51.012370 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 16:46:51.012395 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 16:46:51.012402 | orchestrator | + sleep 5
2025-09-19 16:46:56.016858 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 16:46:56.053729 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 16:46:56.053884 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 16:46:56.053902 | orchestrator | + sleep 5
2025-09-19 16:47:01.059533 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 16:47:01.105012 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 16:47:01.105089 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 16:47:01.105098 | orchestrator | + sleep 5
2025-09-19 16:47:06.109166 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 16:47:06.148217 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-19 16:47:06.148310 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-19 16:47:06.148327 | orchestrator | + sleep 5
2025-09-19 16:47:11.153262 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-19 16:47:11.191036 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 16:47:11.191103 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-19 16:47:11.191117 | orchestrator | + local max_attempts=60
2025-09-19 16:47:11.191129 | orchestrator | + local name=kolla-ansible
2025-09-19 16:47:11.191141 | orchestrator | + local attempt_num=1
2025-09-19 16:47:11.192244 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-19 16:47:11.226363 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 16:47:11.226516 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-19 16:47:11.226530 | orchestrator | + local max_attempts=60
2025-09-19 16:47:11.226542 | orchestrator | + local name=osism-ansible
2025-09-19 16:47:11.226553 | orchestrator | + local attempt_num=1
2025-09-19 16:47:11.226572 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-19 16:47:11.263921 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-19 16:47:11.263956 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-19 16:47:11.263967 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-19 16:47:11.465519 | orchestrator | ARA in ceph-ansible already disabled.
2025-09-19 16:47:11.632446 | orchestrator | ARA in kolla-ansible already disabled.
2025-09-19 16:47:11.787844 | orchestrator | ARA in osism-ansible already disabled.
2025-09-19 16:47:11.927105 | orchestrator | ARA in osism-kubernetes already disabled.
2025-09-19 16:47:11.927312 | orchestrator | + osism apply gather-facts
2025-09-19 16:47:24.069530 | orchestrator | 2025-09-19 16:47:24 | INFO  | Task 4f946009-c757-4fea-8a77-4530db487d2f (gather-facts) was prepared for execution.
2025-09-19 16:47:24.069664 | orchestrator | 2025-09-19 16:47:24 | INFO  | It takes a moment until task 4f946009-c757-4fea-8a77-4530db487d2f (gather-facts) has been started and output is visible here.
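The trace above shows the orchestrator polling container health every five seconds. A minimal sketch of the `wait_for_container_healthy` function, reconstructed purely from the `set -x` output above (the real implementation lives in the testbed configuration scripts, so details such as the error message are assumptions):

```shell
# Reconstructed from the set -x trace; not copied from the source scripts.
# Polls `docker inspect` until the container reports "healthy" or the
# attempt budget is exhausted.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "$name did not become healthy after $max_attempts attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

The `starting` values in the trace come from Docker's health state machine: a container with a HEALTHCHECK reports `starting` until the first successful probe, then `healthy` or `unhealthy`.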
2025-09-19 16:47:36.799883 | orchestrator |
2025-09-19 16:47:36.799992 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 16:47:36.800031 | orchestrator |
2025-09-19 16:47:36.800042 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 16:47:36.800051 | orchestrator | Friday 19 September 2025 16:47:27 +0000 (0:00:00.198) 0:00:00.198 ******
2025-09-19 16:47:36.800061 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:47:36.800072 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:47:36.800081 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:47:36.800091 | orchestrator | ok: [testbed-manager]
2025-09-19 16:47:36.800100 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:47:36.800110 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:47:36.800119 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:47:36.800129 | orchestrator |
2025-09-19 16:47:36.800138 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-19 16:47:36.800148 | orchestrator |
2025-09-19 16:47:36.800157 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-19 16:47:36.800167 | orchestrator | Friday 19 September 2025 16:47:35 +0000 (0:00:08.246) 0:00:08.444 ******
2025-09-19 16:47:36.800177 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:47:36.800187 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:47:36.800197 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:47:36.800206 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:47:36.800216 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:47:36.800225 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:47:36.800234 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:47:36.800244 | orchestrator |
2025-09-19 16:47:36.800253 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:47:36.800263 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 16:47:36.800274 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 16:47:36.800284 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 16:47:36.800293 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 16:47:36.800303 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 16:47:36.800312 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 16:47:36.800322 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 16:47:36.800332 | orchestrator |
2025-09-19 16:47:36.800341 | orchestrator |
2025-09-19 16:47:36.800351 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:47:36.800361 | orchestrator | Friday 19 September 2025 16:47:36 +0000 (0:00:00.530) 0:00:08.975 ******
2025-09-19 16:47:36.800384 | orchestrator | ===============================================================================
2025-09-19 16:47:36.800394 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.25s
2025-09-19 16:47:36.800406 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s
2025-09-19 16:47:37.124312 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-09-19 16:47:37.137288 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-09-19 16:47:37.157076 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-09-19 16:47:37.170807 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-09-19 16:47:37.183515 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-09-19 16:47:37.195355 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-09-19 16:47:37.207053 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-09-19 16:47:37.225804 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-09-19 16:47:37.237537 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-09-19 16:47:37.247799 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-09-19 16:47:37.259057 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-09-19 16:47:37.276638 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-09-19 16:47:37.294284 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-09-19 16:47:37.308446 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-09-19 16:47:37.322625 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-09-19 16:47:37.333927 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-09-19 16:47:37.345405 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-09-19 16:47:37.356601 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-09-19 16:47:37.368106 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-09-19 16:47:37.380088 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-09-19 16:47:37.393732 | orchestrator | + [[ false == \t\r\u\e ]]
2025-09-19 16:47:37.725332 | orchestrator | ok: Runtime: 0:22:56.001687
2025-09-19 16:47:37.816220 |
2025-09-19 16:47:37.816351 | TASK [Deploy services]
2025-09-19 16:47:38.349460 | orchestrator | skipping: Conditional result was False
2025-09-19 16:47:38.370751 |
2025-09-19 16:47:38.372001 | TASK [Deploy in a nutshell]
2025-09-19 16:47:39.068730 | orchestrator | + set -e
2025-09-19 16:47:39.068951 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 16:47:39.068976 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 16:47:39.068996 | orchestrator | ++ INTERACTIVE=false
2025-09-19 16:47:39.069010 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 16:47:39.069022 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 16:47:39.069036 | orchestrator | + source /opt/manager-vars.sh
2025-09-19 16:47:39.069083 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-19 16:47:39.069111 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-19 16:47:39.069125 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-19 16:47:39.069141 | orchestrator | ++ CEPH_VERSION=reef
2025-09-19 16:47:39.069153 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-19 16:47:39.069171 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-19 16:47:39.069182 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-19 16:47:39.069203 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-19 16:47:39.069214 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 16:47:39.069228 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-19 16:47:39.069239 | orchestrator | ++ export ARA=false
2025-09-19 16:47:39.069250 | orchestrator | ++ ARA=false
2025-09-19 16:47:39.069261 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-19 16:47:39.069273 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-19 16:47:39.069299 | orchestrator | ++ export TEMPEST=false
2025-09-19 16:47:39.069310 | orchestrator | ++ TEMPEST=false
2025-09-19 16:47:39.069321 | orchestrator | ++ export IS_ZUUL=true
2025-09-19 16:47:39.069332 | orchestrator | ++ IS_ZUUL=true
2025-09-19 16:47:39.069343 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.107
2025-09-19 16:47:39.069355 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.107
2025-09-19 16:47:39.069365 | orchestrator | ++ export EXTERNAL_API=false
2025-09-19 16:47:39.069376 | orchestrator | ++ EXTERNAL_API=false
2025-09-19 16:47:39.069386 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-19 16:47:39.069397 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-19 16:47:39.069408 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-19 16:47:39.069419 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-19 16:47:39.069430 | orchestrator |
2025-09-19 16:47:39.069441 | orchestrator | # PULL IMAGES
2025-09-19 16:47:39.069452 | orchestrator |
2025-09-19 16:47:39.069463 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-19 16:47:39.069481 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-19 16:47:39.069493 | orchestrator | + echo
2025-09-19 16:47:39.069504 | orchestrator | + echo '# PULL IMAGES'
2025-09-19 16:47:39.069515 | orchestrator | + echo
2025-09-19 16:47:39.070285 | orchestrator | ++ semver latest 7.0.0
2025-09-19 16:47:39.129115 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-19 16:47:39.129202 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-19 16:47:39.129214 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2025-09-19 16:47:41.014659 | orchestrator | 2025-09-19 16:47:41 | INFO  | Trying to run play pull-images in environment custom
2025-09-19 16:47:51.143218 | orchestrator | 2025-09-19 16:47:51 | INFO  | Task fc5b43a0-1e73-49a3-bdd6-82b1b06ea0f9 (pull-images) was prepared for execution.
2025-09-19 16:47:51.143340 | orchestrator | 2025-09-19 16:47:51 | INFO  | Task fc5b43a0-1e73-49a3-bdd6-82b1b06ea0f9 is running in background. No more output. Check ARA for logs.
2025-09-19 16:47:53.413198 | orchestrator | 2025-09-19 16:47:53 | INFO  | Trying to run play wipe-partitions in environment custom
2025-09-19 16:48:03.485362 | orchestrator | 2025-09-19 16:48:03 | INFO  | Task 3ca4beef-74d6-436f-8fff-f30f8ae6e84d (wipe-partitions) was prepared for execution.
2025-09-19 16:48:03.485470 | orchestrator | 2025-09-19 16:48:03 | INFO  | It takes a moment until task 3ca4beef-74d6-436f-8fff-f30f8ae6e84d (wipe-partitions) has been started and output is visible here.
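In the trace above, `semver latest 7.0.0` prints `-1`, so the `-ge 0` check fails, but the literal `latest` comparison rescues the branch and `pull-images` runs. A hedged sketch of that gate (the wrapping function name is invented for illustration; the `semver` helper's exact CLI is assumed from the trace):

```shell
# Illustrative wrapper around the gate seen in the trace: pre-pull images
# when MANAGER_VERSION is a release >= 7.0.0 or the literal "latest".
# `semver A B` is assumed to print -1/0/1 for A<B, A=B, A>B.
maybe_pull_images() {
    if [[ "$(semver "$MANAGER_VERSION" 7.0.0)" -ge 0 || "$MANAGER_VERSION" == "latest" ]]; then
        osism apply --no-wait -r 2 -e custom pull-images
    fi
}
```

`--no-wait` detaches the play, which is why the log says "running in background. No more output. Check ARA for logs."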
2025-09-19 16:48:15.336681 | orchestrator |
2025-09-19 16:48:15.336852 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-09-19 16:48:15.336883 | orchestrator |
2025-09-19 16:48:15.336903 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-09-19 16:48:15.336923 | orchestrator | Friday 19 September 2025 16:48:07 +0000 (0:00:00.135) 0:00:00.135 ******
2025-09-19 16:48:15.336937 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:48:15.336949 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:48:15.336960 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:48:15.336971 | orchestrator |
2025-09-19 16:48:15.336983 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-09-19 16:48:15.337019 | orchestrator | Friday 19 September 2025 16:48:07 +0000 (0:00:00.549) 0:00:00.685 ******
2025-09-19 16:48:15.337031 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:48:15.337042 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:48:15.337057 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:48:15.337068 | orchestrator |
2025-09-19 16:48:15.337079 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-09-19 16:48:15.337090 | orchestrator | Friday 19 September 2025 16:48:08 +0000 (0:00:00.219) 0:00:00.904 ******
2025-09-19 16:48:15.337101 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:48:15.337112 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:48:15.337123 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:48:15.337133 | orchestrator |
2025-09-19 16:48:15.337145 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-09-19 16:48:15.337156 | orchestrator | Friday 19 September 2025 16:48:08 +0000 (0:00:00.672) 0:00:01.577 ******
2025-09-19 16:48:15.337166 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:48:15.337177 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:48:15.337188 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:48:15.337198 | orchestrator |
2025-09-19 16:48:15.337209 | orchestrator | TASK [Check device availability] ***********************************************
2025-09-19 16:48:15.337222 | orchestrator | Friday 19 September 2025 16:48:08 +0000 (0:00:00.227) 0:00:01.805 ******
2025-09-19 16:48:15.337235 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-09-19 16:48:15.337251 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-09-19 16:48:15.337263 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-09-19 16:48:15.337275 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-09-19 16:48:15.337287 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-09-19 16:48:15.337299 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-09-19 16:48:15.337311 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-09-19 16:48:15.337323 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-09-19 16:48:15.337335 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-09-19 16:48:15.337347 | orchestrator |
2025-09-19 16:48:15.337359 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-09-19 16:48:15.337372 | orchestrator | Friday 19 September 2025 16:48:10 +0000 (0:00:01.252) 0:00:03.057 ******
2025-09-19 16:48:15.337384 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-09-19 16:48:15.337397 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-09-19 16:48:15.337408 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-09-19 16:48:15.337420 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-09-19 16:48:15.337432 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-09-19 16:48:15.337444 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-09-19 16:48:15.337456 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-09-19 16:48:15.337468 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-09-19 16:48:15.337480 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-09-19 16:48:15.337492 | orchestrator |
2025-09-19 16:48:15.337503 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-09-19 16:48:15.337515 | orchestrator | Friday 19 September 2025 16:48:11 +0000 (0:00:01.409) 0:00:04.466 ******
2025-09-19 16:48:15.337527 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-09-19 16:48:15.337540 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-09-19 16:48:15.337552 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-09-19 16:48:15.337564 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-09-19 16:48:15.337576 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-09-19 16:48:15.337593 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-09-19 16:48:15.337604 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-09-19 16:48:15.337623 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-09-19 16:48:15.337634 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-09-19 16:48:15.337645 | orchestrator |
2025-09-19 16:48:15.337656 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-09-19 16:48:15.337666 | orchestrator | Friday 19 September 2025 16:48:13 +0000 (0:00:00.573) 0:00:06.681 ******
2025-09-19 16:48:15.337677 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:48:15.337688 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:48:15.337698 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:48:15.337709 | orchestrator |
2025-09-19 16:48:15.337720 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-09-19 16:48:15.337731 | orchestrator | Friday 19 September 2025 16:48:14 +0000 (0:00:00.573) 0:00:07.255 ******
2025-09-19 16:48:15.337741 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:48:15.337752 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:48:15.337762 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:48:15.337793 | orchestrator |
2025-09-19 16:48:15.337804 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:48:15.337817 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:48:15.337830 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:48:15.337859 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:48:15.337870 | orchestrator |
2025-09-19 16:48:15.337882 | orchestrator |
2025-09-19 16:48:15.337893 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:48:15.337904 | orchestrator | Friday 19 September 2025 16:48:15 +0000 (0:00:00.622) 0:00:07.877 ******
2025-09-19 16:48:15.337914 | orchestrator | ===============================================================================
2025-09-19 16:48:15.337925 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.21s
2025-09-19 16:48:15.337936 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.41s
2025-09-19 16:48:15.337947 | orchestrator | Check device availability ----------------------------------------------- 1.25s
2025-09-19 16:48:15.337957 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.67s
2025-09-19 16:48:15.337968 | orchestrator | Request device events from the kernel ----------------------------------- 0.62s
2025-09-19 16:48:15.337979 | orchestrator | Reload udev rules ------------------------------------------------------- 0.57s
2025-09-19 16:48:15.337989 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.55s
2025-09-19 16:48:15.338000 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.23s
2025-09-19 16:48:15.338011 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s
2025-09-19 16:48:27.507647 | orchestrator | 2025-09-19 16:48:27 | INFO  | Task a3ba3cc6-4ea1-42df-adc2-c870cfc71a75 (facts) was prepared for execution.
2025-09-19 16:48:27.507756 | orchestrator | 2025-09-19 16:48:27 | INFO  | It takes a moment until task a3ba3cc6-4ea1-42df-adc2-c870cfc71a75 (facts) has been started and output is visible here.
2025-09-19 16:48:41.131334 | orchestrator |
2025-09-19 16:48:41.131431 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-19 16:48:41.131442 | orchestrator |
2025-09-19 16:48:41.131450 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-19 16:48:41.131458 | orchestrator | Friday 19 September 2025 16:48:31 +0000 (0:00:00.270) 0:00:00.270 ******
2025-09-19 16:48:41.131465 | orchestrator | ok: [testbed-manager]
2025-09-19 16:48:41.131473 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:48:41.131480 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:48:41.131505 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:48:41.131512 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:48:41.131519 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:48:41.131525 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:48:41.131532 | orchestrator |
2025-09-19 16:48:41.131540 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-19 16:48:41.131547 |
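Per device, the wipe-partitions play above amounts to a short shell sequence. A hedged sketch (the function name and device list are illustrative; the commands mirror the task names "Wipe partitions with wipefs", "Overwrite first 32M with zeros", "Reload udev rules", and "Request device events from the kernel", not the play's actual source):

```shell
# Sketch of the per-device wipe the play performs; names are illustrative,
# and the exact dd flags used by the play are an assumption.
wipe_devices() {
    local dev
    for dev in "$@"; do
        wipefs -a "$dev"                                       # drop filesystem/RAID signatures
        dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct  # zero the first 32M
    done
    udevadm control --reload-rules  # reload udev rules
    udevadm trigger                 # request device events from the kernel
}
```

Zeroing the first 32M on top of `wipefs` also clears metadata that lives past the signature offsets (e.g. old LVM or Ceph headers), and the udev reload/trigger makes the kernel re-read the now-empty devices.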
orchestrator | Friday 19 September 2025 16:48:32 +0000 (0:00:01.118) 0:00:01.388 ******
2025-09-19 16:48:41.131553 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:48:41.131561 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:48:41.131568 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:48:41.131574 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:48:41.131581 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:48:41.131587 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:48:41.131594 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:48:41.131601 | orchestrator |
2025-09-19 16:48:41.131607 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 16:48:41.131614 | orchestrator |
2025-09-19 16:48:41.131620 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 16:48:41.131627 | orchestrator | Friday 19 September 2025 16:48:33 +0000 (0:00:01.210) 0:00:02.598 ******
2025-09-19 16:48:41.131634 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:48:41.131640 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:48:41.131647 | orchestrator | ok: [testbed-manager]
2025-09-19 16:48:41.131654 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:48:41.131661 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:48:41.131667 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:48:41.131674 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:48:41.131680 | orchestrator |
2025-09-19 16:48:41.131687 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-19 16:48:41.131693 | orchestrator |
2025-09-19 16:48:41.131700 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-19 16:48:41.131721 | orchestrator | Friday 19 September 2025 16:48:40 +0000 (0:00:06.446) 0:00:09.045 ******
2025-09-19 16:48:41.131728 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:48:41.131734 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:48:41.131757 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:48:41.131763 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:48:41.131770 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:48:41.131809 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:48:41.131816 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:48:41.131822 | orchestrator |
2025-09-19 16:48:41.131829 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:48:41.131836 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:48:41.131844 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:48:41.131850 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:48:41.131857 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:48:41.131864 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:48:41.131871 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:48:41.131877 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:48:41.131884 | orchestrator |
2025-09-19 16:48:41.131897 | orchestrator |
2025-09-19 16:48:41.131905 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:48:41.131913 | orchestrator | Friday 19 September 2025 16:48:40 +0000 (0:00:00.588) 0:00:09.633 ******
2025-09-19 16:48:41.131921 | orchestrator | ===============================================================================
2025-09-19 16:48:41.131928 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.45s
2025-09-19 16:48:41.131936 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.21s
2025-09-19 16:48:41.131943 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s
2025-09-19 16:48:41.131951 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s
2025-09-19 16:48:43.108021 | orchestrator | 2025-09-19 16:48:43 | INFO  | Task cfd05dae-cf37-4faf-8e6b-8865e7ab3e4e (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-19 16:48:43.108117 | orchestrator | 2025-09-19 16:48:43 | INFO  | It takes a moment until task cfd05dae-cf37-4faf-8e6b-8865e7ab3e4e (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-19 16:48:53.639672 | orchestrator |
2025-09-19 16:48:53.639849 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-19 16:48:53.639868 | orchestrator |
2025-09-19 16:48:53.639881 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 16:48:53.639896 | orchestrator | Friday 19 September 2025 16:48:46 +0000 (0:00:00.296) 0:00:00.296 ******
2025-09-19 16:48:53.639908 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 16:48:53.639919 | orchestrator |
2025-09-19 16:48:53.639931 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 16:48:53.639942 | orchestrator | Friday 19 September 2025 16:48:47 +0000 (0:00:00.249) 0:00:00.545 ******
2025-09-19 16:48:53.639953 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:48:53.639965 | orchestrator |
2025-09-19 16:48:53.639976 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:48:53.639987 | orchestrator | Friday 19 September 2025 16:48:47 +0000 (0:00:00.210) 0:00:00.755 ******
2025-09-19 16:48:53.639998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-19 16:48:53.640010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-19 16:48:53.640021 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-19 16:48:53.640032 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-19 16:48:53.640043 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-19 16:48:53.640053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-19 16:48:53.640064 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-19 16:48:53.640075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-19 16:48:53.640086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-19 16:48:53.640097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-19 16:48:53.640108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-19 16:48:53.640127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-19 16:48:53.640139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-19 16:48:53.640150 | orchestrator |
2025-09-19 16:48:53.640161 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:48:53.640172 | orchestrator | Friday 19 September 2025 16:48:47 +0000 (0:00:00.332) 0:00:01.088 ******
2025-09-19 16:48:53.640183 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:48:53.640218 | orchestrator |
2025-09-19 16:48:53.640231 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:48:53.640244 | orchestrator | Friday 19 September 2025 16:48:48 +0000 (0:00:00.383) 0:00:01.472 ******
2025-09-19 16:48:53.640256 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:48:53.640268 | orchestrator |
2025-09-19 16:48:53.640280 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:48:53.640292 | orchestrator | Friday 19 September 2025 16:48:48 +0000 (0:00:00.185) 0:00:01.657 ******
2025-09-19 16:48:53.640304 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:48:53.640316 | orchestrator |
2025-09-19 16:48:53.640329 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:48:53.640341 | orchestrator | Friday 19 September 2025 16:48:48 +0000 (0:00:00.183) 0:00:01.841 ******
2025-09-19 16:48:53.640353 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:48:53.640369 | orchestrator |
2025-09-19 16:48:53.640381 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:48:53.640394 | orchestrator | Friday 19 September 2025 16:48:48 +0000 (0:00:00.184) 0:00:02.026 ******
2025-09-19 16:48:53.640405 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:48:53.640417 | orchestrator |
2025-09-19 16:48:53.640430 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:48:53.640443 | orchestrator | Friday 19 September 2025 16:48:48 +0000 (0:00:00.190) 0:00:02.217 ******
2025-09-19 16:48:53.640456 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:48:53.640468 | orchestrator |
2025-09-19 16:48:53.640480 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:48:53.640492 | orchestrator | Friday 19 September 2025 16:48:49 +0000 (0:00:00.176) 0:00:02.393 ******
2025-09-19 16:48:53.640504 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:48:53.640516 | orchestrator |
2025-09-19 16:48:53.640528 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:48:53.640541 | orchestrator | Friday 19 September 2025 16:48:49 +0000 (0:00:00.187) 0:00:02.580 ******
2025-09-19 16:48:53.640552 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:48:53.640563 | orchestrator |
2025-09-19 16:48:53.640574 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:48:53.640584 | orchestrator | Friday 19 September 2025 16:48:49 +0000 (0:00:00.186) 0:00:02.766 ******
2025-09-19 16:48:53.640595 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989)
2025-09-19 16:48:53.640607 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989)
2025-09-19 16:48:53.640618 | orchestrator |
2025-09-19 16:48:53.640629 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:48:53.640639 | orchestrator | Friday 19 September 2025 16:48:49 +0000 (0:00:00.378) 0:00:03.145 ******
2025-09-19 16:48:53.640668 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_49605ec5-af84-4e56-b6e7-0932efbf1bcd)
2025-09-19 16:48:53.640679 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_49605ec5-af84-4e56-b6e7-0932efbf1bcd)
2025-09-19 16:48:53.640690 | orchestrator |
2025-09-19 16:48:53.640701 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:48:53.640712 | orchestrator | Friday 19 September 2025 16:48:50 +0000 (0:00:00.390) 0:00:03.535 ******
2025-09-19 16:48:53.640723 |
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9516e090-09d3-47b2-a672-12f5ce683363) 2025-09-19 16:48:53.640733 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9516e090-09d3-47b2-a672-12f5ce683363) 2025-09-19 16:48:53.640744 | orchestrator | 2025-09-19 16:48:53.640755 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:48:53.640765 | orchestrator | Friday 19 September 2025 16:48:50 +0000 (0:00:00.524) 0:00:04.060 ****** 2025-09-19 16:48:53.640776 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bfd7083e-59a5-451a-9789-189314eae1f5) 2025-09-19 16:48:53.640815 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bfd7083e-59a5-451a-9789-189314eae1f5) 2025-09-19 16:48:53.640827 | orchestrator | 2025-09-19 16:48:53.640837 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:48:53.640848 | orchestrator | Friday 19 September 2025 16:48:51 +0000 (0:00:00.540) 0:00:04.600 ****** 2025-09-19 16:48:53.640859 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 16:48:53.640869 | orchestrator | 2025-09-19 16:48:53.640880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:48:53.640896 | orchestrator | Friday 19 September 2025 16:48:51 +0000 (0:00:00.582) 0:00:05.182 ****** 2025-09-19 16:48:53.640907 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-19 16:48:53.640918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-19 16:48:53.640928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-19 16:48:53.640939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-09-19 16:48:53.640949 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-19 16:48:53.640960 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-19 16:48:53.640970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-19 16:48:53.640981 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-19 16:48:53.640991 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-19 16:48:53.641001 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-19 16:48:53.641012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-19 16:48:53.641022 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-19 16:48:53.641033 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-19 16:48:53.641043 | orchestrator | 2025-09-19 16:48:53.641054 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:48:53.641065 | orchestrator | Friday 19 September 2025 16:48:52 +0000 (0:00:00.353) 0:00:05.535 ****** 2025-09-19 16:48:53.641075 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:48:53.641086 | orchestrator | 2025-09-19 16:48:53.641097 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:48:53.641107 | orchestrator | Friday 19 September 2025 16:48:52 +0000 (0:00:00.190) 0:00:05.726 ****** 2025-09-19 16:48:53.641118 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:48:53.641128 | orchestrator | 2025-09-19 16:48:53.641139 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-19 16:48:53.641150 | orchestrator | Friday 19 September 2025 16:48:52 +0000 (0:00:00.196) 0:00:05.922 ****** 2025-09-19 16:48:53.641160 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:48:53.641171 | orchestrator | 2025-09-19 16:48:53.641181 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:48:53.641192 | orchestrator | Friday 19 September 2025 16:48:52 +0000 (0:00:00.196) 0:00:06.118 ****** 2025-09-19 16:48:53.641203 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:48:53.641213 | orchestrator | 2025-09-19 16:48:53.641224 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:48:53.641235 | orchestrator | Friday 19 September 2025 16:48:52 +0000 (0:00:00.180) 0:00:06.299 ****** 2025-09-19 16:48:53.641246 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:48:53.641256 | orchestrator | 2025-09-19 16:48:53.641273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:48:53.641284 | orchestrator | Friday 19 September 2025 16:48:53 +0000 (0:00:00.185) 0:00:06.484 ****** 2025-09-19 16:48:53.641294 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:48:53.641304 | orchestrator | 2025-09-19 16:48:53.641315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:48:53.641325 | orchestrator | Friday 19 September 2025 16:48:53 +0000 (0:00:00.182) 0:00:06.667 ****** 2025-09-19 16:48:53.641336 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:48:53.641346 | orchestrator | 2025-09-19 16:48:53.641357 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:48:53.641368 | orchestrator | Friday 19 September 2025 16:48:53 +0000 (0:00:00.182) 0:00:06.850 ****** 2025-09-19 16:48:53.641385 | 
orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.275163 | orchestrator | 2025-09-19 16:49:00.275279 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:00.275296 | orchestrator | Friday 19 September 2025 16:48:53 +0000 (0:00:00.170) 0:00:07.020 ****** 2025-09-19 16:49:00.275308 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-19 16:49:00.275321 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-19 16:49:00.275332 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-19 16:49:00.275343 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-19 16:49:00.275354 | orchestrator | 2025-09-19 16:49:00.275365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:00.275376 | orchestrator | Friday 19 September 2025 16:48:54 +0000 (0:00:00.832) 0:00:07.853 ****** 2025-09-19 16:49:00.275388 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.275399 | orchestrator | 2025-09-19 16:49:00.275409 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:00.275420 | orchestrator | Friday 19 September 2025 16:48:54 +0000 (0:00:00.175) 0:00:08.028 ****** 2025-09-19 16:49:00.275432 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.275442 | orchestrator | 2025-09-19 16:49:00.275454 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:00.275468 | orchestrator | Friday 19 September 2025 16:48:54 +0000 (0:00:00.183) 0:00:08.212 ****** 2025-09-19 16:49:00.275487 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.275505 | orchestrator | 2025-09-19 16:49:00.275523 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:00.275540 | orchestrator | Friday 19 September 2025 16:48:55 +0000 (0:00:00.224) 
0:00:08.436 ****** 2025-09-19 16:49:00.275558 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.275577 | orchestrator | 2025-09-19 16:49:00.275597 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-19 16:49:00.275616 | orchestrator | Friday 19 September 2025 16:48:55 +0000 (0:00:00.191) 0:00:08.628 ****** 2025-09-19 16:49:00.275635 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-19 16:49:00.275649 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-19 16:49:00.275660 | orchestrator | 2025-09-19 16:49:00.275671 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-19 16:49:00.275682 | orchestrator | Friday 19 September 2025 16:48:55 +0000 (0:00:00.157) 0:00:08.785 ****** 2025-09-19 16:49:00.275714 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.275727 | orchestrator | 2025-09-19 16:49:00.275740 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-19 16:49:00.275752 | orchestrator | Friday 19 September 2025 16:48:55 +0000 (0:00:00.124) 0:00:08.909 ****** 2025-09-19 16:49:00.275764 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.275775 | orchestrator | 2025-09-19 16:49:00.275817 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-19 16:49:00.275829 | orchestrator | Friday 19 September 2025 16:48:55 +0000 (0:00:00.120) 0:00:09.030 ****** 2025-09-19 16:49:00.275840 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.275875 | orchestrator | 2025-09-19 16:49:00.275892 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-19 16:49:00.275910 | orchestrator | Friday 19 September 2025 16:48:55 +0000 (0:00:00.117) 0:00:09.148 ****** 2025-09-19 16:49:00.275928 | orchestrator | ok: 
[testbed-node-3] 2025-09-19 16:49:00.275953 | orchestrator | 2025-09-19 16:49:00.275975 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-19 16:49:00.275992 | orchestrator | Friday 19 September 2025 16:48:55 +0000 (0:00:00.125) 0:00:09.273 ****** 2025-09-19 16:49:00.276012 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '502e1679-2b8a-59ad-b2cc-f53252d80a70'}}) 2025-09-19 16:49:00.276032 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '189b9442-6cba-5a76-9378-3098f039bcec'}}) 2025-09-19 16:49:00.276051 | orchestrator | 2025-09-19 16:49:00.276070 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-19 16:49:00.276088 | orchestrator | Friday 19 September 2025 16:48:56 +0000 (0:00:00.144) 0:00:09.418 ****** 2025-09-19 16:49:00.276109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '502e1679-2b8a-59ad-b2cc-f53252d80a70'}})  2025-09-19 16:49:00.276137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '189b9442-6cba-5a76-9378-3098f039bcec'}})  2025-09-19 16:49:00.276149 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.276161 | orchestrator | 2025-09-19 16:49:00.276179 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-19 16:49:00.276197 | orchestrator | Friday 19 September 2025 16:48:56 +0000 (0:00:00.135) 0:00:09.554 ****** 2025-09-19 16:49:00.276215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '502e1679-2b8a-59ad-b2cc-f53252d80a70'}})  2025-09-19 16:49:00.276233 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '189b9442-6cba-5a76-9378-3098f039bcec'}})  2025-09-19 16:49:00.276251 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
16:49:00.276268 | orchestrator | 2025-09-19 16:49:00.276285 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-19 16:49:00.276302 | orchestrator | Friday 19 September 2025 16:48:56 +0000 (0:00:00.270) 0:00:09.824 ****** 2025-09-19 16:49:00.276319 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '502e1679-2b8a-59ad-b2cc-f53252d80a70'}})  2025-09-19 16:49:00.276337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '189b9442-6cba-5a76-9378-3098f039bcec'}})  2025-09-19 16:49:00.276354 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.276372 | orchestrator | 2025-09-19 16:49:00.276453 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-19 16:49:00.276472 | orchestrator | Friday 19 September 2025 16:48:56 +0000 (0:00:00.142) 0:00:09.966 ****** 2025-09-19 16:49:00.276489 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:49:00.276506 | orchestrator | 2025-09-19 16:49:00.276523 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-19 16:49:00.276553 | orchestrator | Friday 19 September 2025 16:48:56 +0000 (0:00:00.128) 0:00:10.095 ****** 2025-09-19 16:49:00.276573 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:49:00.276592 | orchestrator | 2025-09-19 16:49:00.276611 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-19 16:49:00.276630 | orchestrator | Friday 19 September 2025 16:48:56 +0000 (0:00:00.126) 0:00:10.222 ****** 2025-09-19 16:49:00.276648 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.276666 | orchestrator | 2025-09-19 16:49:00.276690 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-19 16:49:00.276714 | orchestrator | Friday 19 September 2025 16:48:56 +0000 (0:00:00.122) 
0:00:10.344 ****** 2025-09-19 16:49:00.276733 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.276752 | orchestrator | 2025-09-19 16:49:00.276835 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-19 16:49:00.276849 | orchestrator | Friday 19 September 2025 16:48:57 +0000 (0:00:00.124) 0:00:10.469 ****** 2025-09-19 16:49:00.276860 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.276871 | orchestrator | 2025-09-19 16:49:00.276882 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-19 16:49:00.276893 | orchestrator | Friday 19 September 2025 16:48:57 +0000 (0:00:00.121) 0:00:10.590 ****** 2025-09-19 16:49:00.276903 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 16:49:00.276914 | orchestrator |  "ceph_osd_devices": { 2025-09-19 16:49:00.276925 | orchestrator |  "sdb": { 2025-09-19 16:49:00.276936 | orchestrator |  "osd_lvm_uuid": "502e1679-2b8a-59ad-b2cc-f53252d80a70" 2025-09-19 16:49:00.276947 | orchestrator |  }, 2025-09-19 16:49:00.276958 | orchestrator |  "sdc": { 2025-09-19 16:49:00.276969 | orchestrator |  "osd_lvm_uuid": "189b9442-6cba-5a76-9378-3098f039bcec" 2025-09-19 16:49:00.276980 | orchestrator |  } 2025-09-19 16:49:00.276991 | orchestrator |  } 2025-09-19 16:49:00.277002 | orchestrator | } 2025-09-19 16:49:00.277017 | orchestrator | 2025-09-19 16:49:00.277036 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-19 16:49:00.277053 | orchestrator | Friday 19 September 2025 16:48:57 +0000 (0:00:00.130) 0:00:10.721 ****** 2025-09-19 16:49:00.277071 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.277088 | orchestrator | 2025-09-19 16:49:00.277108 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-19 16:49:00.277126 | orchestrator | Friday 19 September 2025 16:48:57 +0000 (0:00:00.125) 
0:00:10.847 ****** 2025-09-19 16:49:00.277144 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.277155 | orchestrator | 2025-09-19 16:49:00.277166 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-19 16:49:00.277177 | orchestrator | Friday 19 September 2025 16:48:57 +0000 (0:00:00.121) 0:00:10.969 ****** 2025-09-19 16:49:00.277187 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:49:00.277198 | orchestrator | 2025-09-19 16:49:00.277209 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-19 16:49:00.277219 | orchestrator | Friday 19 September 2025 16:48:57 +0000 (0:00:00.117) 0:00:11.086 ****** 2025-09-19 16:49:00.277230 | orchestrator | changed: [testbed-node-3] => { 2025-09-19 16:49:00.277241 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-19 16:49:00.277251 | orchestrator |  "ceph_osd_devices": { 2025-09-19 16:49:00.277262 | orchestrator |  "sdb": { 2025-09-19 16:49:00.277273 | orchestrator |  "osd_lvm_uuid": "502e1679-2b8a-59ad-b2cc-f53252d80a70" 2025-09-19 16:49:00.277284 | orchestrator |  }, 2025-09-19 16:49:00.277295 | orchestrator |  "sdc": { 2025-09-19 16:49:00.277305 | orchestrator |  "osd_lvm_uuid": "189b9442-6cba-5a76-9378-3098f039bcec" 2025-09-19 16:49:00.277316 | orchestrator |  } 2025-09-19 16:49:00.277327 | orchestrator |  }, 2025-09-19 16:49:00.277338 | orchestrator |  "lvm_volumes": [ 2025-09-19 16:49:00.277348 | orchestrator |  { 2025-09-19 16:49:00.277359 | orchestrator |  "data": "osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70", 2025-09-19 16:49:00.277371 | orchestrator |  "data_vg": "ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70" 2025-09-19 16:49:00.277381 | orchestrator |  }, 2025-09-19 16:49:00.277392 | orchestrator |  { 2025-09-19 16:49:00.277403 | orchestrator |  "data": "osd-block-189b9442-6cba-5a76-9378-3098f039bcec", 2025-09-19 16:49:00.277413 | orchestrator |  "data_vg": 
"ceph-189b9442-6cba-5a76-9378-3098f039bcec" 2025-09-19 16:49:00.277424 | orchestrator |  } 2025-09-19 16:49:00.277435 | orchestrator |  ] 2025-09-19 16:49:00.277446 | orchestrator |  } 2025-09-19 16:49:00.277456 | orchestrator | } 2025-09-19 16:49:00.277467 | orchestrator | 2025-09-19 16:49:00.277478 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-19 16:49:00.277505 | orchestrator | Friday 19 September 2025 16:48:57 +0000 (0:00:00.185) 0:00:11.272 ****** 2025-09-19 16:49:00.277516 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 16:49:00.277527 | orchestrator | 2025-09-19 16:49:00.277538 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-19 16:49:00.277548 | orchestrator | 2025-09-19 16:49:00.277559 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 16:49:00.277570 | orchestrator | Friday 19 September 2025 16:48:59 +0000 (0:00:01.909) 0:00:13.182 ****** 2025-09-19 16:49:00.277581 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-19 16:49:00.277591 | orchestrator | 2025-09-19 16:49:00.277602 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 16:49:00.277612 | orchestrator | Friday 19 September 2025 16:49:00 +0000 (0:00:00.239) 0:00:13.421 ****** 2025-09-19 16:49:00.277623 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:49:00.277634 | orchestrator | 2025-09-19 16:49:00.277644 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:00.277668 | orchestrator | Friday 19 September 2025 16:49:00 +0000 (0:00:00.234) 0:00:13.655 ****** 2025-09-19 16:49:07.060246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-19 16:49:07.060350 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-19 16:49:07.060365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-19 16:49:07.060377 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-19 16:49:07.060388 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-19 16:49:07.060399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-19 16:49:07.060410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-19 16:49:07.060420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-19 16:49:07.060431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-19 16:49:07.060442 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-19 16:49:07.060453 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-19 16:49:07.060464 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-19 16:49:07.060475 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-19 16:49:07.060490 | orchestrator | 2025-09-19 16:49:07.060503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:07.060515 | orchestrator | Friday 19 September 2025 16:49:00 +0000 (0:00:00.355) 0:00:14.011 ****** 2025-09-19 16:49:07.060526 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.060538 | orchestrator | 2025-09-19 16:49:07.060550 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 
16:49:07.060561 | orchestrator | Friday 19 September 2025 16:49:00 +0000 (0:00:00.195) 0:00:14.207 ****** 2025-09-19 16:49:07.060572 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.060583 | orchestrator | 2025-09-19 16:49:07.060594 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:07.060605 | orchestrator | Friday 19 September 2025 16:49:01 +0000 (0:00:00.200) 0:00:14.407 ****** 2025-09-19 16:49:07.060615 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.060626 | orchestrator | 2025-09-19 16:49:07.060638 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:07.060649 | orchestrator | Friday 19 September 2025 16:49:01 +0000 (0:00:00.190) 0:00:14.598 ****** 2025-09-19 16:49:07.060660 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.060694 | orchestrator | 2025-09-19 16:49:07.060706 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:07.060716 | orchestrator | Friday 19 September 2025 16:49:01 +0000 (0:00:00.190) 0:00:14.789 ****** 2025-09-19 16:49:07.060727 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.060738 | orchestrator | 2025-09-19 16:49:07.060749 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:07.060759 | orchestrator | Friday 19 September 2025 16:49:01 +0000 (0:00:00.425) 0:00:15.214 ****** 2025-09-19 16:49:07.060770 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.060781 | orchestrator | 2025-09-19 16:49:07.060850 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:07.060865 | orchestrator | Friday 19 September 2025 16:49:02 +0000 (0:00:00.175) 0:00:15.390 ****** 2025-09-19 16:49:07.060877 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.060889 | 
orchestrator | 2025-09-19 16:49:07.060919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:07.060932 | orchestrator | Friday 19 September 2025 16:49:02 +0000 (0:00:00.204) 0:00:15.595 ****** 2025-09-19 16:49:07.060945 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.060957 | orchestrator | 2025-09-19 16:49:07.060970 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:07.060982 | orchestrator | Friday 19 September 2025 16:49:02 +0000 (0:00:00.195) 0:00:15.790 ****** 2025-09-19 16:49:07.060995 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f) 2025-09-19 16:49:07.061009 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f) 2025-09-19 16:49:07.061021 | orchestrator | 2025-09-19 16:49:07.061034 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:07.061047 | orchestrator | Friday 19 September 2025 16:49:02 +0000 (0:00:00.381) 0:00:16.171 ****** 2025-09-19 16:49:07.061060 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8c3574da-2fac-4f58-bc83-f51ba9425a73) 2025-09-19 16:49:07.061074 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8c3574da-2fac-4f58-bc83-f51ba9425a73) 2025-09-19 16:49:07.061086 | orchestrator | 2025-09-19 16:49:07.061098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:07.061111 | orchestrator | Friday 19 September 2025 16:49:03 +0000 (0:00:00.407) 0:00:16.579 ****** 2025-09-19 16:49:07.061124 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8547d473-0710-428a-9585-3879cf611acd) 2025-09-19 16:49:07.061136 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_8547d473-0710-428a-9585-3879cf611acd) 2025-09-19 16:49:07.061148 | orchestrator | 2025-09-19 16:49:07.061161 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:07.061172 | orchestrator | Friday 19 September 2025 16:49:03 +0000 (0:00:00.394) 0:00:16.973 ****** 2025-09-19 16:49:07.061199 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8ef3193b-7b85-4a69-91dc-ff1919c1d0b3) 2025-09-19 16:49:07.061211 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8ef3193b-7b85-4a69-91dc-ff1919c1d0b3) 2025-09-19 16:49:07.061222 | orchestrator | 2025-09-19 16:49:07.061233 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:07.061244 | orchestrator | Friday 19 September 2025 16:49:03 +0000 (0:00:00.371) 0:00:17.344 ****** 2025-09-19 16:49:07.061254 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 16:49:07.061265 | orchestrator | 2025-09-19 16:49:07.061276 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:07.061287 | orchestrator | Friday 19 September 2025 16:49:04 +0000 (0:00:00.293) 0:00:17.637 ****** 2025-09-19 16:49:07.061298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-19 16:49:07.061330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-19 16:49:07.061341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-19 16:49:07.061352 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-19 16:49:07.061363 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-19 16:49:07.061373 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-19 16:49:07.061384 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-19 16:49:07.061394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-19 16:49:07.061405 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-19 16:49:07.061416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-19 16:49:07.061426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-19 16:49:07.061437 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-19 16:49:07.061448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-19 16:49:07.061458 | orchestrator | 2025-09-19 16:49:07.061469 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:07.061480 | orchestrator | Friday 19 September 2025 16:49:04 +0000 (0:00:00.326) 0:00:17.964 ****** 2025-09-19 16:49:07.061491 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.061502 | orchestrator | 2025-09-19 16:49:07.061512 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:07.061523 | orchestrator | Friday 19 September 2025 16:49:04 +0000 (0:00:00.172) 0:00:18.137 ****** 2025-09-19 16:49:07.061534 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.061544 | orchestrator | 2025-09-19 16:49:07.061555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:07.061566 | orchestrator | Friday 19 September 2025 16:49:05 +0000 (0:00:00.456) 0:00:18.594 ****** 
2025-09-19 16:49:07.061583 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.061594 | orchestrator | 2025-09-19 16:49:07.061605 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:07.061633 | orchestrator | Friday 19 September 2025 16:49:05 +0000 (0:00:00.176) 0:00:18.770 ****** 2025-09-19 16:49:07.061655 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.061666 | orchestrator | 2025-09-19 16:49:07.061677 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:07.061688 | orchestrator | Friday 19 September 2025 16:49:05 +0000 (0:00:00.176) 0:00:18.947 ****** 2025-09-19 16:49:07.061699 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.061709 | orchestrator | 2025-09-19 16:49:07.061720 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:07.061731 | orchestrator | Friday 19 September 2025 16:49:05 +0000 (0:00:00.174) 0:00:19.121 ****** 2025-09-19 16:49:07.061741 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.061752 | orchestrator | 2025-09-19 16:49:07.061763 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:07.061773 | orchestrator | Friday 19 September 2025 16:49:05 +0000 (0:00:00.184) 0:00:19.305 ****** 2025-09-19 16:49:07.061803 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.061814 | orchestrator | 2025-09-19 16:49:07.061825 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:07.061836 | orchestrator | Friday 19 September 2025 16:49:06 +0000 (0:00:00.188) 0:00:19.494 ****** 2025-09-19 16:49:07.061847 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.061858 | orchestrator | 2025-09-19 16:49:07.061868 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-19 16:49:07.061886 | orchestrator | Friday 19 September 2025 16:49:06 +0000 (0:00:00.183) 0:00:19.678 ****** 2025-09-19 16:49:07.061897 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-19 16:49:07.061909 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-19 16:49:07.061920 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-19 16:49:07.061931 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-19 16:49:07.061941 | orchestrator | 2025-09-19 16:49:07.061952 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:07.061963 | orchestrator | Friday 19 September 2025 16:49:06 +0000 (0:00:00.574) 0:00:20.253 ****** 2025-09-19 16:49:07.061974 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:07.061984 | orchestrator | 2025-09-19 16:49:07.062002 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:12.928574 | orchestrator | Friday 19 September 2025 16:49:07 +0000 (0:00:00.187) 0:00:20.441 ****** 2025-09-19 16:49:12.928678 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:12.928695 | orchestrator | 2025-09-19 16:49:12.928708 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:12.928720 | orchestrator | Friday 19 September 2025 16:49:07 +0000 (0:00:00.191) 0:00:20.633 ****** 2025-09-19 16:49:12.928731 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:12.928742 | orchestrator | 2025-09-19 16:49:12.928754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:12.928765 | orchestrator | Friday 19 September 2025 16:49:07 +0000 (0:00:00.181) 0:00:20.814 ****** 2025-09-19 16:49:12.928776 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:49:12.928842 | orchestrator | 2025-09-19 16:49:12.928855 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] ***********************************************
2025-09-19 16:49:12.928866 | orchestrator | Friday 19 September 2025 16:49:07 +0000 (0:00:00.176) 0:00:20.991 ******
2025-09-19 16:49:12.928877 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2025-09-19 16:49:12.928888 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2025-09-19 16:49:12.928899 | orchestrator |
2025-09-19 16:49:12.928910 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-09-19 16:49:12.928921 | orchestrator | Friday 19 September 2025 16:49:07 +0000 (0:00:00.270) 0:00:21.262 ******
2025-09-19 16:49:12.928932 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:49:12.928943 | orchestrator |
2025-09-19 16:49:12.928954 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-09-19 16:49:12.928965 | orchestrator | Friday 19 September 2025 16:49:08 +0000 (0:00:00.178) 0:00:21.440 ******
2025-09-19 16:49:12.928976 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:49:12.928987 | orchestrator |
2025-09-19 16:49:12.928997 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-09-19 16:49:12.929008 | orchestrator | Friday 19 September 2025 16:49:08 +0000 (0:00:00.118) 0:00:21.559 ******
2025-09-19 16:49:12.929019 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:49:12.929029 | orchestrator |
2025-09-19 16:49:12.929040 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-09-19 16:49:12.929055 | orchestrator | Friday 19 September 2025 16:49:08 +0000 (0:00:00.121) 0:00:21.680 ******
2025-09-19 16:49:12.929074 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:49:12.929090 | orchestrator |
2025-09-19 16:49:12.929101 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-09-19 16:49:12.929113 | orchestrator | Friday 19 September 2025 16:49:08 +0000 (0:00:00.128) 0:00:21.808 ******
2025-09-19 16:49:12.929126 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'}})
2025-09-19 16:49:12.929139 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'}})
2025-09-19 16:49:12.929151 | orchestrator |
2025-09-19 16:49:12.929163 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-09-19 16:49:12.929199 | orchestrator | Friday 19 September 2025 16:49:08 +0000 (0:00:00.157) 0:00:21.966 ******
2025-09-19 16:49:12.929211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'}})
2025-09-19 16:49:12.929223 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'}})
2025-09-19 16:49:12.929234 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:49:12.929245 | orchestrator |
2025-09-19 16:49:12.929256 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-09-19 16:49:12.929267 | orchestrator | Friday 19 September 2025 16:49:08 +0000 (0:00:00.147) 0:00:22.113 ******
2025-09-19 16:49:12.929295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'}})
2025-09-19 16:49:12.929307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'}})
2025-09-19 16:49:12.929318 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:49:12.929328 | orchestrator |
2025-09-19 16:49:12.929339 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-09-19 16:49:12.929350 | orchestrator | Friday 19 September 2025 16:49:08 +0000 (0:00:00.141) 0:00:22.254 ******
2025-09-19 16:49:12.929361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'}})
2025-09-19 16:49:12.929372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'}})
2025-09-19 16:49:12.929384 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:49:12.929394 | orchestrator |
2025-09-19 16:49:12.929405 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-09-19 16:49:12.929416 | orchestrator | Friday 19 September 2025 16:49:09 +0000 (0:00:00.136) 0:00:22.391 ******
2025-09-19 16:49:12.929427 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:49:12.929437 | orchestrator |
2025-09-19 16:49:12.929448 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-09-19 16:49:12.929459 | orchestrator | Friday 19 September 2025 16:49:09 +0000 (0:00:00.129) 0:00:22.521 ******
2025-09-19 16:49:12.929476 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:49:12.929494 | orchestrator |
2025-09-19 16:49:12.929515 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-09-19 16:49:12.929541 | orchestrator | Friday 19 September 2025 16:49:09 +0000 (0:00:00.131) 0:00:22.652 ******
2025-09-19 16:49:12.929559 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:49:12.929576 | orchestrator |
2025-09-19 16:49:12.929617 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-09-19 16:49:12.929634 | orchestrator | Friday 19 September 2025 16:49:09 +0000 (0:00:00.127) 0:00:22.780 ******
2025-09-19 16:49:12.929650 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:49:12.929669 | orchestrator |
2025-09-19 16:49:12.929686 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-09-19 16:49:12.929704 | orchestrator | Friday 19 September 2025 16:49:09 +0000 (0:00:00.239) 0:00:23.019 ******
2025-09-19 16:49:12.929723 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:49:12.929742 | orchestrator |
2025-09-19 16:49:12.929760 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-19 16:49:12.929777 | orchestrator | Friday 19 September 2025 16:49:09 +0000 (0:00:00.110) 0:00:23.130 ******
2025-09-19 16:49:12.929813 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 16:49:12.929824 | orchestrator |  "ceph_osd_devices": {
2025-09-19 16:49:12.929835 | orchestrator |  "sdb": {
2025-09-19 16:49:12.929846 | orchestrator |  "osd_lvm_uuid": "6bee08d2-4d0c-5efd-9bb6-6357ac0256e2"
2025-09-19 16:49:12.929857 | orchestrator |  },
2025-09-19 16:49:12.929868 | orchestrator |  "sdc": {
2025-09-19 16:49:12.929893 | orchestrator |  "osd_lvm_uuid": "c5ef3a10-bb06-5cc2-b298-3a565f19d9a7"
2025-09-19 16:49:12.929904 | orchestrator |  }
2025-09-19 16:49:12.929915 | orchestrator |  }
2025-09-19 16:49:12.929926 | orchestrator | }
2025-09-19 16:49:12.929937 | orchestrator |
2025-09-19 16:49:12.929948 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-19 16:49:12.929959 | orchestrator | Friday 19 September 2025 16:49:09 +0000 (0:00:00.119) 0:00:23.250 ******
2025-09-19 16:49:12.929970 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:49:12.929980 | orchestrator |
2025-09-19 16:49:12.929991 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-19 16:49:12.930002 | orchestrator | Friday 19 September 2025 16:49:09 +0000 (0:00:00.118) 0:00:23.368 ******
2025-09-19 16:49:12.930013 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:49:12.930097 | orchestrator |
2025-09-19 16:49:12.930109 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-19 16:49:12.930120 | orchestrator | Friday 19 September 2025 16:49:10 +0000 (0:00:00.107) 0:00:23.476 ******
2025-09-19 16:49:12.930131 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:49:12.930142 | orchestrator |
2025-09-19 16:49:12.930152 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-19 16:49:12.930163 | orchestrator | Friday 19 September 2025 16:49:10 +0000 (0:00:00.104) 0:00:23.581 ******
2025-09-19 16:49:12.930174 | orchestrator | changed: [testbed-node-4] => {
2025-09-19 16:49:12.930184 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-09-19 16:49:12.930195 | orchestrator |  "ceph_osd_devices": {
2025-09-19 16:49:12.930206 | orchestrator |  "sdb": {
2025-09-19 16:49:12.930217 | orchestrator |  "osd_lvm_uuid": "6bee08d2-4d0c-5efd-9bb6-6357ac0256e2"
2025-09-19 16:49:12.930228 | orchestrator |  },
2025-09-19 16:49:12.930239 | orchestrator |  "sdc": {
2025-09-19 16:49:12.930249 | orchestrator |  "osd_lvm_uuid": "c5ef3a10-bb06-5cc2-b298-3a565f19d9a7"
2025-09-19 16:49:12.930260 | orchestrator |  }
2025-09-19 16:49:12.930271 | orchestrator |  },
2025-09-19 16:49:12.930282 | orchestrator |  "lvm_volumes": [
2025-09-19 16:49:12.930292 | orchestrator |  {
2025-09-19 16:49:12.930303 | orchestrator |  "data": "osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2",
2025-09-19 16:49:12.930314 | orchestrator |  "data_vg": "ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2"
2025-09-19 16:49:12.930324 | orchestrator |  },
2025-09-19 16:49:12.930335 | orchestrator |  {
2025-09-19 16:49:12.930346 | orchestrator |  "data": "osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7",
2025-09-19 16:49:12.930357 | orchestrator |  "data_vg": "ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7"
2025-09-19 16:49:12.930367 | orchestrator |  }
2025-09-19 16:49:12.930378 | orchestrator |  ]
2025-09-19 16:49:12.930389 | orchestrator |  }
2025-09-19 16:49:12.930399 | orchestrator | }
2025-09-19 16:49:12.930410 | orchestrator |
2025-09-19 16:49:12.930421 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-19 16:49:12.930432 | orchestrator | Friday 19 September 2025 16:49:10 +0000 (0:00:00.177) 0:00:23.759 ******
2025-09-19 16:49:12.930442 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-19 16:49:12.930453 | orchestrator |
2025-09-19 16:49:12.930464 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-19 16:49:12.930475 | orchestrator |
2025-09-19 16:49:12.930485 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 16:49:12.930496 | orchestrator | Friday 19 September 2025 16:49:11 +0000 (0:00:01.001) 0:00:24.761 ******
2025-09-19 16:49:12.930507 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-19 16:49:12.930517 | orchestrator |
2025-09-19 16:49:12.930528 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 16:49:12.930539 | orchestrator | Friday 19 September 2025 16:49:11 +0000 (0:00:00.675) 0:00:25.236 ******
2025-09-19 16:49:12.930558 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:49:12.930568 | orchestrator |
2025-09-19 16:49:12.930579 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:49:12.930590 | orchestrator | Friday 19 September 2025 16:49:12 +0000 (0:00:00.394) 0:00:25.911 ******
2025-09-19 16:49:12.930609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-09-19 16:49:12.930620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-09-19 16:49:12.930631 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-09-19 16:49:12.930641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-09-19 16:49:12.930652 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-09-19 16:49:12.930662 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-09-19 16:49:12.930684 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-09-19 16:49:20.327695 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-09-19 16:49:20.327852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-09-19 16:49:20.327869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-09-19 16:49:20.327880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-09-19 16:49:20.327892 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-09-19 16:49:20.327903 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-09-19 16:49:20.327914 | orchestrator |
2025-09-19 16:49:20.327926 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:49:20.327938 | orchestrator | Friday 19 September 2025 16:49:12 +0000 (0:00:00.394) 0:00:26.306 ******
2025-09-19 16:49:20.327949 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:49:20.327961 | orchestrator |
2025-09-19 16:49:20.327972 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:49:20.327984 | orchestrator | Friday 19 September 2025 16:49:13 +0000 (0:00:00.197) 0:00:26.504 ******
2025-09-19 16:49:20.327994 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:49:20.328005 | orchestrator |
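The "Print configuration data" output for testbed-node-4 above shows the transformation this play performs: each entry in `ceph_osd_devices` (keyed by device name, carrying an `osd_lvm_uuid`) is expanded into one `lvm_volumes` entry whose LV is named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A minimal sketch of that mapping in plain Python — not the OSISM implementation, the naming scheme and input values are simply read off the log:

```python
# Input copied from the "Print ceph_osd_devices" output for testbed-node-4.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "6bee08d2-4d0c-5efd-9bb6-6357ac0256e2"},
    "sdc": {"osd_lvm_uuid": "c5ef3a10-bb06-5cc2-b298-3a565f19d9a7"},
}

# Each OSD device yields one lvm_volumes entry. This run used the
# "block only" variant; the block+db / block+wal tasks were skipped.
lvm_volumes = [
    {
        "data": f"osd-block-{dev['osd_lvm_uuid']}",
        "data_vg": f"ceph-{dev['osd_lvm_uuid']}",
    }
    for dev in ceph_osd_devices.values()
]

print(lvm_volumes[0]["data_vg"])  # ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2
```

The resulting list matches the `lvm_volumes` array written by the "Write configuration file" handler.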
2025-09-19 16:49:20.328016 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:20.328027 | orchestrator | Friday 19 September 2025 16:49:13 +0000 (0:00:00.201) 0:00:26.705 ****** 2025-09-19 16:49:20.328038 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.328049 | orchestrator | 2025-09-19 16:49:20.328060 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:20.328070 | orchestrator | Friday 19 September 2025 16:49:13 +0000 (0:00:00.209) 0:00:26.915 ****** 2025-09-19 16:49:20.328081 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.328092 | orchestrator | 2025-09-19 16:49:20.328103 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:20.328114 | orchestrator | Friday 19 September 2025 16:49:13 +0000 (0:00:00.215) 0:00:27.131 ****** 2025-09-19 16:49:20.328124 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.328135 | orchestrator | 2025-09-19 16:49:20.328146 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:20.328157 | orchestrator | Friday 19 September 2025 16:49:13 +0000 (0:00:00.182) 0:00:27.313 ****** 2025-09-19 16:49:20.328168 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.328178 | orchestrator | 2025-09-19 16:49:20.328189 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:20.328200 | orchestrator | Friday 19 September 2025 16:49:14 +0000 (0:00:00.198) 0:00:27.511 ****** 2025-09-19 16:49:20.328211 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.328246 | orchestrator | 2025-09-19 16:49:20.328259 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:20.328271 | orchestrator | Friday 19 September 2025 16:49:14 +0000 
(0:00:00.186) 0:00:27.698 ****** 2025-09-19 16:49:20.328283 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.328296 | orchestrator | 2025-09-19 16:49:20.328308 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:20.328320 | orchestrator | Friday 19 September 2025 16:49:14 +0000 (0:00:00.202) 0:00:27.900 ****** 2025-09-19 16:49:20.328333 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf) 2025-09-19 16:49:20.328346 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf) 2025-09-19 16:49:20.328359 | orchestrator | 2025-09-19 16:49:20.328371 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:20.328383 | orchestrator | Friday 19 September 2025 16:49:15 +0000 (0:00:00.613) 0:00:28.514 ****** 2025-09-19 16:49:20.328395 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5e704911-d475-45db-a46e-b2c1a2edd26e) 2025-09-19 16:49:20.328406 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5e704911-d475-45db-a46e-b2c1a2edd26e) 2025-09-19 16:49:20.328419 | orchestrator | 2025-09-19 16:49:20.328431 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:20.328443 | orchestrator | Friday 19 September 2025 16:49:15 +0000 (0:00:00.628) 0:00:29.143 ****** 2025-09-19 16:49:20.328455 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ea7e2490-24d2-49b7-b6d3-38bb6098dff1) 2025-09-19 16:49:20.328468 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ea7e2490-24d2-49b7-b6d3-38bb6098dff1) 2025-09-19 16:49:20.328480 | orchestrator | 2025-09-19 16:49:20.328493 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:20.328505 | orchestrator | 
Friday 19 September 2025 16:49:16 +0000 (0:00:00.380) 0:00:29.523 ****** 2025-09-19 16:49:20.328517 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_bc231350-c60d-45ad-9b08-eb0e8cdec0b5) 2025-09-19 16:49:20.328529 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bc231350-c60d-45ad-9b08-eb0e8cdec0b5) 2025-09-19 16:49:20.328542 | orchestrator | 2025-09-19 16:49:20.328554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:49:20.328565 | orchestrator | Friday 19 September 2025 16:49:16 +0000 (0:00:00.364) 0:00:29.888 ****** 2025-09-19 16:49:20.328576 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 16:49:20.328587 | orchestrator | 2025-09-19 16:49:20.328598 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.328609 | orchestrator | Friday 19 September 2025 16:49:16 +0000 (0:00:00.302) 0:00:30.191 ****** 2025-09-19 16:49:20.328637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-19 16:49:20.328649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-19 16:49:20.328660 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-19 16:49:20.328670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-19 16:49:20.328681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-19 16:49:20.328692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-19 16:49:20.328703 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-19 16:49:20.328713 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-19 16:49:20.328725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-19 16:49:20.328762 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-19 16:49:20.328773 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-19 16:49:20.328784 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-19 16:49:20.328812 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-19 16:49:20.328823 | orchestrator | 2025-09-19 16:49:20.328834 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.328845 | orchestrator | Friday 19 September 2025 16:49:17 +0000 (0:00:00.339) 0:00:30.530 ****** 2025-09-19 16:49:20.328856 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.328866 | orchestrator | 2025-09-19 16:49:20.328878 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.328888 | orchestrator | Friday 19 September 2025 16:49:17 +0000 (0:00:00.190) 0:00:30.720 ****** 2025-09-19 16:49:20.328899 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.328910 | orchestrator | 2025-09-19 16:49:20.328921 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.328932 | orchestrator | Friday 19 September 2025 16:49:17 +0000 (0:00:00.198) 0:00:30.919 ****** 2025-09-19 16:49:20.328942 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.328953 | orchestrator | 2025-09-19 16:49:20.328969 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.328980 | 
orchestrator | Friday 19 September 2025 16:49:17 +0000 (0:00:00.187) 0:00:31.107 ****** 2025-09-19 16:49:20.328991 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.329002 | orchestrator | 2025-09-19 16:49:20.329013 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.329023 | orchestrator | Friday 19 September 2025 16:49:17 +0000 (0:00:00.199) 0:00:31.307 ****** 2025-09-19 16:49:20.329034 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.329045 | orchestrator | 2025-09-19 16:49:20.329056 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.329067 | orchestrator | Friday 19 September 2025 16:49:18 +0000 (0:00:00.175) 0:00:31.482 ****** 2025-09-19 16:49:20.329077 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.329088 | orchestrator | 2025-09-19 16:49:20.329099 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.329110 | orchestrator | Friday 19 September 2025 16:49:18 +0000 (0:00:00.459) 0:00:31.941 ****** 2025-09-19 16:49:20.329120 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.329131 | orchestrator | 2025-09-19 16:49:20.329142 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.329153 | orchestrator | Friday 19 September 2025 16:49:18 +0000 (0:00:00.165) 0:00:32.107 ****** 2025-09-19 16:49:20.329164 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.329175 | orchestrator | 2025-09-19 16:49:20.329186 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.329196 | orchestrator | Friday 19 September 2025 16:49:18 +0000 (0:00:00.209) 0:00:32.317 ****** 2025-09-19 16:49:20.329207 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-19 16:49:20.329218 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-09-19 16:49:20.329229 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-19 16:49:20.329240 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-19 16:49:20.329250 | orchestrator | 2025-09-19 16:49:20.329261 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.329272 | orchestrator | Friday 19 September 2025 16:49:19 +0000 (0:00:00.616) 0:00:32.934 ****** 2025-09-19 16:49:20.329283 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.329294 | orchestrator | 2025-09-19 16:49:20.329304 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.329323 | orchestrator | Friday 19 September 2025 16:49:19 +0000 (0:00:00.192) 0:00:33.127 ****** 2025-09-19 16:49:20.329334 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.329345 | orchestrator | 2025-09-19 16:49:20.329356 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.329367 | orchestrator | Friday 19 September 2025 16:49:19 +0000 (0:00:00.182) 0:00:33.309 ****** 2025-09-19 16:49:20.329377 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.329388 | orchestrator | 2025-09-19 16:49:20.329399 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:49:20.329410 | orchestrator | Friday 19 September 2025 16:49:20 +0000 (0:00:00.197) 0:00:33.507 ****** 2025-09-19 16:49:20.329421 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:20.329432 | orchestrator | 2025-09-19 16:49:20.329442 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-19 16:49:20.329459 | orchestrator | Friday 19 September 2025 16:49:20 +0000 (0:00:00.197) 0:00:33.705 ****** 2025-09-19 16:49:24.681522 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'sdb', 'value': None}) 2025-09-19 16:49:24.681629 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-19 16:49:24.681645 | orchestrator | 2025-09-19 16:49:24.681659 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-19 16:49:24.681671 | orchestrator | Friday 19 September 2025 16:49:20 +0000 (0:00:00.193) 0:00:33.898 ****** 2025-09-19 16:49:24.681682 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:24.681694 | orchestrator | 2025-09-19 16:49:24.681705 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-19 16:49:24.681716 | orchestrator | Friday 19 September 2025 16:49:20 +0000 (0:00:00.135) 0:00:34.033 ****** 2025-09-19 16:49:24.681727 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:24.681738 | orchestrator | 2025-09-19 16:49:24.681749 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-19 16:49:24.681760 | orchestrator | Friday 19 September 2025 16:49:20 +0000 (0:00:00.110) 0:00:34.144 ****** 2025-09-19 16:49:24.681771 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:24.681782 | orchestrator | 2025-09-19 16:49:24.681849 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-19 16:49:24.681861 | orchestrator | Friday 19 September 2025 16:49:20 +0000 (0:00:00.130) 0:00:34.274 ****** 2025-09-19 16:49:24.681872 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:49:24.681883 | orchestrator | 2025-09-19 16:49:24.681895 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-19 16:49:24.681905 | orchestrator | Friday 19 September 2025 16:49:21 +0000 (0:00:00.346) 0:00:34.621 ****** 2025-09-19 16:49:24.681918 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4de995f9-e371-53ec-a5e6-95298d442fa2'}}) 
2025-09-19 16:49:24.681930 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ea687e85-c7c1-53f3-8dfd-7d637eed1a38'}}) 2025-09-19 16:49:24.681940 | orchestrator | 2025-09-19 16:49:24.681951 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-19 16:49:24.681962 | orchestrator | Friday 19 September 2025 16:49:21 +0000 (0:00:00.179) 0:00:34.801 ****** 2025-09-19 16:49:24.681974 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4de995f9-e371-53ec-a5e6-95298d442fa2'}})  2025-09-19 16:49:24.681987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ea687e85-c7c1-53f3-8dfd-7d637eed1a38'}})  2025-09-19 16:49:24.681998 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:24.682009 | orchestrator | 2025-09-19 16:49:24.682074 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-19 16:49:24.682122 | orchestrator | Friday 19 September 2025 16:49:21 +0000 (0:00:00.180) 0:00:34.981 ****** 2025-09-19 16:49:24.682136 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4de995f9-e371-53ec-a5e6-95298d442fa2'}})  2025-09-19 16:49:24.682177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ea687e85-c7c1-53f3-8dfd-7d637eed1a38'}})  2025-09-19 16:49:24.682190 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:24.682203 | orchestrator | 2025-09-19 16:49:24.682216 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-19 16:49:24.682227 | orchestrator | Friday 19 September 2025 16:49:21 +0000 (0:00:00.174) 0:00:35.155 ****** 2025-09-19 16:49:24.682238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4de995f9-e371-53ec-a5e6-95298d442fa2'}})  2025-09-19 
16:49:24.682249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ea687e85-c7c1-53f3-8dfd-7d637eed1a38'}})  2025-09-19 16:49:24.682260 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:24.682271 | orchestrator | 2025-09-19 16:49:24.682282 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-19 16:49:24.682293 | orchestrator | Friday 19 September 2025 16:49:21 +0000 (0:00:00.154) 0:00:35.310 ****** 2025-09-19 16:49:24.682303 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:49:24.682314 | orchestrator | 2025-09-19 16:49:24.682342 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-19 16:49:24.682354 | orchestrator | Friday 19 September 2025 16:49:22 +0000 (0:00:00.142) 0:00:35.452 ****** 2025-09-19 16:49:24.682365 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:49:24.682376 | orchestrator | 2025-09-19 16:49:24.682387 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-19 16:49:24.682398 | orchestrator | Friday 19 September 2025 16:49:22 +0000 (0:00:00.141) 0:00:35.593 ****** 2025-09-19 16:49:24.682408 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:24.682419 | orchestrator | 2025-09-19 16:49:24.682430 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-19 16:49:24.682441 | orchestrator | Friday 19 September 2025 16:49:22 +0000 (0:00:00.143) 0:00:35.737 ****** 2025-09-19 16:49:24.682452 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:49:24.682463 | orchestrator | 2025-09-19 16:49:24.682474 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-19 16:49:24.682484 | orchestrator | Friday 19 September 2025 16:49:22 +0000 (0:00:00.144) 0:00:35.882 ****** 2025-09-19 16:49:24.682495 | orchestrator | skipping: [testbed-node-5] 
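A side note on the `osd_lvm_uuid` values in this run (e.g. `4de995f9-e371-53ec-...`): the version nibble in the third group is 5, so these appear to be deterministic, name-based UUIDs (RFC 4122 uuid5), which would keep OSD VG/LV names stable across re-runs. The exact namespace and name inputs used by the playbooks are not visible in this log; the sketch below only demonstrates the determinism with hypothetical inputs:

```python
import uuid

# Hypothetical inputs (host + device); the real namespace/name pair used
# by the "Set UUIDs for OSD VGs/LVs" task is not shown in this log.
a = uuid.uuid5(uuid.NAMESPACE_DNS, "testbed-node-5/sdb")
b = uuid.uuid5(uuid.NAMESPACE_DNS, "testbed-node-5/sdb")

print(a.version)  # 5, like the osd_lvm_uuid values in this log
print(a == b)     # True: same inputs always yield the same UUID
```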
2025-09-19 16:49:24.682506 | orchestrator |
2025-09-19 16:49:24.682517 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-09-19 16:49:24.682528 | orchestrator | Friday 19 September 2025  16:49:22 +0000 (0:00:00.150)       0:00:36.032 ******
2025-09-19 16:49:24.682539 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 16:49:24.682550 | orchestrator |     "ceph_osd_devices": {
2025-09-19 16:49:24.682561 | orchestrator |         "sdb": {
2025-09-19 16:49:24.682572 | orchestrator |             "osd_lvm_uuid": "4de995f9-e371-53ec-a5e6-95298d442fa2"
2025-09-19 16:49:24.682601 | orchestrator |         },
2025-09-19 16:49:24.682613 | orchestrator |         "sdc": {
2025-09-19 16:49:24.682624 | orchestrator |             "osd_lvm_uuid": "ea687e85-c7c1-53f3-8dfd-7d637eed1a38"
2025-09-19 16:49:24.682635 | orchestrator |         }
2025-09-19 16:49:24.682646 | orchestrator |     }
2025-09-19 16:49:24.682658 | orchestrator | }
2025-09-19 16:49:24.682669 | orchestrator |
2025-09-19 16:49:24.682680 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-09-19 16:49:24.682691 | orchestrator | Friday 19 September 2025  16:49:22 +0000 (0:00:00.144)       0:00:36.177 ******
2025-09-19 16:49:24.682702 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:49:24.682713 | orchestrator |
2025-09-19 16:49:24.682724 | orchestrator | TASK [Print DB devices] ********************************************************
2025-09-19 16:49:24.682734 | orchestrator | Friday 19 September 2025  16:49:22 +0000 (0:00:00.133)       0:00:36.310 ******
2025-09-19 16:49:24.682745 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:49:24.682756 | orchestrator |
2025-09-19 16:49:24.682767 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-09-19 16:49:24.682808 | orchestrator | Friday 19 September 2025  16:49:23 +0000 (0:00:00.355)       0:00:36.666 ******
2025-09-19 16:49:24.682821 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:49:24.682831 | orchestrator |
2025-09-19 16:49:24.682842 | orchestrator | TASK [Print configuration data] ************************************************
2025-09-19 16:49:24.682853 | orchestrator | Friday 19 September 2025  16:49:23 +0000 (0:00:00.131)       0:00:36.797 ******
2025-09-19 16:49:24.682864 | orchestrator | changed: [testbed-node-5] => {
2025-09-19 16:49:24.682875 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-09-19 16:49:24.682886 | orchestrator |         "ceph_osd_devices": {
2025-09-19 16:49:24.682896 | orchestrator |             "sdb": {
2025-09-19 16:49:24.682907 | orchestrator |                 "osd_lvm_uuid": "4de995f9-e371-53ec-a5e6-95298d442fa2"
2025-09-19 16:49:24.682918 | orchestrator |             },
2025-09-19 16:49:24.682929 | orchestrator |             "sdc": {
2025-09-19 16:49:24.682940 | orchestrator |                 "osd_lvm_uuid": "ea687e85-c7c1-53f3-8dfd-7d637eed1a38"
2025-09-19 16:49:24.682951 | orchestrator |             }
2025-09-19 16:49:24.682962 | orchestrator |         },
2025-09-19 16:49:24.682973 | orchestrator |         "lvm_volumes": [
2025-09-19 16:49:24.682984 | orchestrator |             {
2025-09-19 16:49:24.682995 | orchestrator |                 "data": "osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2",
2025-09-19 16:49:24.683006 | orchestrator |                 "data_vg": "ceph-4de995f9-e371-53ec-a5e6-95298d442fa2"
2025-09-19 16:49:24.683016 | orchestrator |             },
2025-09-19 16:49:24.683027 | orchestrator |             {
2025-09-19 16:49:24.683038 | orchestrator |                 "data": "osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38",
2025-09-19 16:49:24.683050 | orchestrator |                 "data_vg": "ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38"
2025-09-19 16:49:24.683060 | orchestrator |             }
2025-09-19 16:49:24.683071 | orchestrator |         ]
2025-09-19 16:49:24.683082 | orchestrator |     }
2025-09-19 16:49:24.683098 | orchestrator | }
2025-09-19 16:49:24.683109 | orchestrator |
2025-09-19 16:49:24.683120 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-09-19 16:49:24.683130 | orchestrator | Friday 19 September 2025  16:49:23 +0000 (0:00:00.203)       0:00:37.001 ******
2025-09-19 16:49:24.683141 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-19 16:49:24.683152 | orchestrator |
2025-09-19 16:49:24.683162 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:49:24.683173 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-19 16:49:24.683185 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-19 16:49:24.683196 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-09-19 16:49:24.683207 | orchestrator |
2025-09-19 16:49:24.683217 | orchestrator |
2025-09-19 16:49:24.683228 | orchestrator |
2025-09-19 16:49:24.683239 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:49:24.683249 | orchestrator | Friday 19 September 2025  16:49:24 +0000 (0:00:01.046)       0:00:38.048 ******
2025-09-19 16:49:24.683260 | orchestrator | ===============================================================================
2025-09-19 16:49:24.683271 | orchestrator | Write configuration file ------------------------------------------------ 3.96s
2025-09-19 16:49:24.683281 | orchestrator | Get initial list of available block devices ----------------------------- 1.12s
2025-09-19 16:49:24.683292 | orchestrator | Add known links to the list of available block devices ------------------ 1.08s
2025-09-19 16:49:24.683302 | orchestrator | Add known partitions to the list of available block devices ------------- 1.02s
2025-09-19 16:49:24.683313 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.96s
2025-09-19 16:49:24.683332 | orchestrator | Add known partitions to the list of available block devices ------------- 0.83s
2025-09-19 16:49:24.683343 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-09-19 16:49:24.683354 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.62s
2025-09-19 16:49:24.683364 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2025-09-19 16:49:24.683375 | orchestrator | Add known links to the list of available block devices ------------------ 0.61s
2025-09-19 16:49:24.683385 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.60s
2025-09-19 16:49:24.683396 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.59s
2025-09-19 16:49:24.683407 | orchestrator | Print DB devices -------------------------------------------------------- 0.59s
2025-09-19 16:49:24.683417 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2025-09-19 16:49:24.683435 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s
2025-09-19 16:49:25.057991 | orchestrator | Print configuration data ------------------------------------------------ 0.57s
2025-09-19 16:49:25.058137 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s
2025-09-19 16:49:25.058151 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s
2025-09-19 16:49:25.058163 | orchestrator | Set WAL devices config data --------------------------------------------- 0.51s
2025-09-19 16:49:25.058174 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.48s
2025-09-19 16:49:47.698197 | orchestrator | 2025-09-19 16:49:47 | INFO  | Task 84e12b29-73a7-415e-bc9c-fab82c3231f1 (sync inventory) is running in background. Output coming soon.
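The PLAY RECAP lines above are the usual place to check a run programmatically (all hosts reachable, `failed=0`). A small sketch of parsing one of those lines into counters, useful when post-processing logs like this one:

```python
import re

# Sketch: parse an Ansible PLAY RECAP line (as printed above) into a
# host name and a dict of integer counters, e.g. to verify failed == 0.
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap(line: str):
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    counters = {key: int(value) for key, value in
                (pair.split("=") for pair in m.group("counters").split())}
    return m.group("host"), counters

host, counters = parse_recap(
    "testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 "
    "skipped=32  rescued=0 ignored=0")
print(host, counters)
```

This only covers the simple `key=N` recap format Ansible prints here; exotic callback plugins may format the recap differently.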
2025-09-19 16:50:12.359876 | orchestrator | 2025-09-19 16:49:48 | INFO  | Starting group_vars file reorganization
2025-09-19 16:50:12.359988 | orchestrator | 2025-09-19 16:49:49 | INFO  | Moved 0 file(s) to their respective directories
2025-09-19 16:50:12.360005 | orchestrator | 2025-09-19 16:49:49 | INFO  | Group_vars file reorganization completed
2025-09-19 16:50:12.360017 | orchestrator | 2025-09-19 16:49:51 | INFO  | Starting variable preparation from inventory
2025-09-19 16:50:12.360029 | orchestrator | 2025-09-19 16:49:54 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-09-19 16:50:12.360040 | orchestrator | 2025-09-19 16:49:54 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-09-19 16:50:12.360051 | orchestrator | 2025-09-19 16:49:54 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-09-19 16:50:12.360061 | orchestrator | 2025-09-19 16:49:54 | INFO  | 3 file(s) written, 6 host(s) processed
2025-09-19 16:50:12.360073 | orchestrator | 2025-09-19 16:49:54 | INFO  | Variable preparation completed
2025-09-19 16:50:12.360084 | orchestrator | 2025-09-19 16:49:55 | INFO  | Starting inventory overwrite handling
2025-09-19 16:50:12.360095 | orchestrator | 2025-09-19 16:49:55 | INFO  | Handling group overwrites in 99-overwrite
2025-09-19 16:50:12.360132 | orchestrator | 2025-09-19 16:49:55 | INFO  | Removing group frr:children from 60-generic
2025-09-19 16:50:12.360144 | orchestrator | 2025-09-19 16:49:55 | INFO  | Removing group storage:children from 50-kolla
2025-09-19 16:50:12.360155 | orchestrator | 2025-09-19 16:49:55 | INFO  | Removing group netbird:children from 50-infrastruture
2025-09-19 16:50:12.360166 | orchestrator | 2025-09-19 16:49:55 | INFO  | Removing group ceph-mds from 50-ceph
2025-09-19 16:50:12.360177 | orchestrator | 2025-09-19 16:49:55 | INFO  | Removing group ceph-rgw from 50-ceph
2025-09-19 16:50:12.360188 | orchestrator | 2025-09-19 16:49:55 | INFO  | Handling group overwrites in 20-roles
2025-09-19 16:50:12.360199 | orchestrator | 2025-09-19 16:49:55 | INFO  | Removing group k3s_node from 50-infrastruture
2025-09-19 16:50:12.360233 | orchestrator | 2025-09-19 16:49:55 | INFO  | Removed 6 group(s) in total
2025-09-19 16:50:12.360244 | orchestrator | 2025-09-19 16:49:55 | INFO  | Inventory overwrite handling completed
2025-09-19 16:50:12.360255 | orchestrator | 2025-09-19 16:49:56 | INFO  | Starting merge of inventory files
2025-09-19 16:50:12.360266 | orchestrator | 2025-09-19 16:49:56 | INFO  | Inventory files merged successfully
2025-09-19 16:50:12.360277 | orchestrator | 2025-09-19 16:50:00 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-09-19 16:50:12.360287 | orchestrator | 2025-09-19 16:50:11 | INFO  | Successfully wrote ClusterShell configuration
2025-09-19 16:50:12.360298 | orchestrator | [master 098e904] 2025-09-19-16-50
2025-09-19 16:50:12.360310 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2025-09-19 16:50:14.420048 | orchestrator | 2025-09-19 16:50:14 | INFO  | Task 09b9b19f-8b5d-47eb-b843-7d607ad4e1cd (ceph-create-lvm-devices) was prepared for execution.
2025-09-19 16:50:14.420144 | orchestrator | 2025-09-19 16:50:14 | INFO  | It takes a moment until task 09b9b19f-8b5d-47eb-b843-7d607ad4e1cd (ceph-create-lvm-devices) has been started and output is visible here.
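The overwrite handling logged above removes whole group sections (e.g. `frr:children`, `storage:children`) from lower-priority inventory files when a higher-priority file such as `99-overwrite` redefines them. A rough sketch of that idea on INI-style inventory text (a hypothetical helper, not the actual osism implementation):

```python
# Sketch (hypothetical, not the osism code): drop one INI inventory
# section, e.g. [frr:children], so a higher-priority inventory file
# can redefine that group without the two definitions being merged.
def remove_group(inventory_text: str, group: str) -> str:
    out, skipping = [], False
    for line in inventory_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("[") and stripped.endswith("]"):
            # Entering a new section: skip it only if it is the target group.
            skipping = stripped == f"[{group}]"
        if not skipping:
            out.append(line)
    return "\n".join(out)

text = "[frr:children]\ncontrol\ncompute\n\n[storage]\ntestbed-node-3\n"
print(remove_group(text, "frr:children"))
```

Real inventories also carry `:vars` sections and comments, which a production version would need to handle; this only illustrates the section-removal step the log reports.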
2025-09-19 16:50:25.631590 | orchestrator |
2025-09-19 16:50:25.631717 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-19 16:50:25.631735 | orchestrator |
2025-09-19 16:50:25.631748 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 16:50:25.631760 | orchestrator | Friday 19 September 2025  16:50:18 +0000 (0:00:00.278)       0:00:00.278 ******
2025-09-19 16:50:25.631776 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 16:50:25.631794 | orchestrator |
2025-09-19 16:50:25.631877 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-19 16:50:25.631897 | orchestrator | Friday 19 September 2025  16:50:18 +0000 (0:00:00.277)       0:00:00.556 ******
2025-09-19 16:50:25.631917 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:50:25.631937 | orchestrator |
2025-09-19 16:50:25.631952 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.631964 | orchestrator | Friday 19 September 2025  16:50:18 +0000 (0:00:00.217)       0:00:00.774 ******
2025-09-19 16:50:25.631975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-19 16:50:25.631988 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-19 16:50:25.631999 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-19 16:50:25.632010 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-19 16:50:25.632021 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-19 16:50:25.632031 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-19 16:50:25.632042 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-19 16:50:25.632053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-19 16:50:25.632063 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-19 16:50:25.632074 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-19 16:50:25.632085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-19 16:50:25.632097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-19 16:50:25.632109 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-19 16:50:25.632121 | orchestrator |
2025-09-19 16:50:25.632133 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.632169 | orchestrator | Friday 19 September 2025  16:50:19 +0000 (0:00:00.408)       0:00:01.182 ******
2025-09-19 16:50:25.632182 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.632195 | orchestrator |
2025-09-19 16:50:25.632207 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.632219 | orchestrator | Friday 19 September 2025  16:50:19 +0000 (0:00:00.447)       0:00:01.630 ******
2025-09-19 16:50:25.632231 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.632243 | orchestrator |
2025-09-19 16:50:25.632255 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.632267 | orchestrator | Friday 19 September 2025  16:50:19 +0000 (0:00:00.198)       0:00:01.828 ******
2025-09-19 16:50:25.632279 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.632291 | orchestrator |
2025-09-19 16:50:25.632303 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.632315 | orchestrator | Friday 19 September 2025  16:50:19 +0000 (0:00:00.191)       0:00:02.019 ******
2025-09-19 16:50:25.632327 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.632339 | orchestrator |
2025-09-19 16:50:25.632352 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.632364 | orchestrator | Friday 19 September 2025  16:50:20 +0000 (0:00:00.207)       0:00:02.226 ******
2025-09-19 16:50:25.632376 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.632388 | orchestrator |
2025-09-19 16:50:25.632400 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.632412 | orchestrator | Friday 19 September 2025  16:50:20 +0000 (0:00:00.202)       0:00:02.429 ******
2025-09-19 16:50:25.632424 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.632437 | orchestrator |
2025-09-19 16:50:25.632449 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.632460 | orchestrator | Friday 19 September 2025  16:50:20 +0000 (0:00:00.202)       0:00:02.632 ******
2025-09-19 16:50:25.632471 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.632481 | orchestrator |
2025-09-19 16:50:25.632492 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.632503 | orchestrator | Friday 19 September 2025  16:50:20 +0000 (0:00:00.184)       0:00:02.817 ******
2025-09-19 16:50:25.632513 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.632524 | orchestrator |
2025-09-19 16:50:25.632534 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.632545 | orchestrator | Friday 19 September 2025  16:50:20 +0000 (0:00:00.193)       0:00:03.010 ******
2025-09-19 16:50:25.632556 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989)
2025-09-19 16:50:25.632568 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989)
2025-09-19 16:50:25.632578 | orchestrator |
2025-09-19 16:50:25.632589 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.632600 | orchestrator | Friday 19 September 2025  16:50:21 +0000 (0:00:00.395)       0:00:03.406 ******
2025-09-19 16:50:25.632631 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_49605ec5-af84-4e56-b6e7-0932efbf1bcd)
2025-09-19 16:50:25.632643 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_49605ec5-af84-4e56-b6e7-0932efbf1bcd)
2025-09-19 16:50:25.632653 | orchestrator |
2025-09-19 16:50:25.632664 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.632675 | orchestrator | Friday 19 September 2025  16:50:21 +0000 (0:00:00.416)       0:00:03.823 ******
2025-09-19 16:50:25.632685 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_9516e090-09d3-47b2-a672-12f5ce683363)
2025-09-19 16:50:25.632696 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_9516e090-09d3-47b2-a672-12f5ce683363)
2025-09-19 16:50:25.632707 | orchestrator |
2025-09-19 16:50:25.632718 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.632735 | orchestrator | Friday 19 September 2025  16:50:22 +0000 (0:00:00.649)       0:00:04.472 ******
2025-09-19 16:50:25.632746 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_bfd7083e-59a5-451a-9789-189314eae1f5)
2025-09-19 16:50:25.632757 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_bfd7083e-59a5-451a-9789-189314eae1f5)
2025-09-19 16:50:25.632767 | orchestrator |
2025-09-19 16:50:25.632778 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:25.632789 | orchestrator | Friday 19 September 2025  16:50:23 +0000 (0:00:00.863)       0:00:05.335 ******
2025-09-19 16:50:25.632820 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-19 16:50:25.632831 | orchestrator |
2025-09-19 16:50:25.632842 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:25.632853 | orchestrator | Friday 19 September 2025  16:50:23 +0000 (0:00:00.331)       0:00:05.667 ******
2025-09-19 16:50:25.632863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-09-19 16:50:25.632874 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-09-19 16:50:25.632884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-09-19 16:50:25.632895 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-09-19 16:50:25.632905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-09-19 16:50:25.632916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-09-19 16:50:25.632926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-09-19 16:50:25.632937 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-09-19 16:50:25.632967 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-09-19 16:50:25.632978 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-09-19 16:50:25.632989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-09-19 16:50:25.633000 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-09-19 16:50:25.633015 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-09-19 16:50:25.633026 | orchestrator |
2025-09-19 16:50:25.633037 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:25.633048 | orchestrator | Friday 19 September 2025  16:50:23 +0000 (0:00:00.409)       0:00:06.076 ******
2025-09-19 16:50:25.633059 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.633069 | orchestrator |
2025-09-19 16:50:25.633080 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:25.633091 | orchestrator | Friday 19 September 2025  16:50:24 +0000 (0:00:00.200)       0:00:06.277 ******
2025-09-19 16:50:25.633102 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.633112 | orchestrator |
2025-09-19 16:50:25.633123 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:25.633134 | orchestrator | Friday 19 September 2025  16:50:24 +0000 (0:00:00.213)       0:00:06.491 ******
2025-09-19 16:50:25.633144 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.633155 | orchestrator |
2025-09-19 16:50:25.633166 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:25.633176 | orchestrator | Friday 19 September 2025  16:50:24 +0000 (0:00:00.199)       0:00:06.691 ******
2025-09-19 16:50:25.633187 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.633197 | orchestrator |
2025-09-19 16:50:25.633208 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:25.633226 | orchestrator | Friday 19 September 2025  16:50:24 +0000 (0:00:00.218)       0:00:06.909 ******
2025-09-19 16:50:25.633237 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.633247 | orchestrator |
2025-09-19 16:50:25.633258 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:25.633269 | orchestrator | Friday 19 September 2025  16:50:25 +0000 (0:00:00.208)       0:00:07.118 ******
2025-09-19 16:50:25.633279 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.633290 | orchestrator |
2025-09-19 16:50:25.633301 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:25.633312 | orchestrator | Friday 19 September 2025  16:50:25 +0000 (0:00:00.208)       0:00:07.326 ******
2025-09-19 16:50:25.633322 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:25.633333 | orchestrator |
2025-09-19 16:50:25.633344 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:25.633355 | orchestrator | Friday 19 September 2025  16:50:25 +0000 (0:00:00.191)       0:00:07.518 ******
2025-09-19 16:50:25.633372 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.166365 | orchestrator |
2025-09-19 16:50:34.166457 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:34.166470 | orchestrator | Friday 19 September 2025  16:50:25 +0000 (0:00:00.201)       0:00:07.720 ******
2025-09-19 16:50:34.166478 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-09-19 16:50:34.166487 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-09-19 16:50:34.166495 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-09-19 16:50:34.166502 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-09-19 16:50:34.166509 | orchestrator |
2025-09-19 16:50:34.166517 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:34.166524 | orchestrator | Friday 19 September 2025  16:50:26 +0000 (0:00:01.117)       0:00:08.837 ******
2025-09-19 16:50:34.166532 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.166539 | orchestrator |
2025-09-19 16:50:34.166546 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:34.166553 | orchestrator | Friday 19 September 2025  16:50:26 +0000 (0:00:00.214)       0:00:09.051 ******
2025-09-19 16:50:34.166561 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.166568 | orchestrator |
2025-09-19 16:50:34.166575 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:34.166582 | orchestrator | Friday 19 September 2025  16:50:27 +0000 (0:00:00.217)       0:00:09.268 ******
2025-09-19 16:50:34.166589 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.166596 | orchestrator |
2025-09-19 16:50:34.166604 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:34.166611 | orchestrator | Friday 19 September 2025  16:50:27 +0000 (0:00:00.193)       0:00:09.462 ******
2025-09-19 16:50:34.166619 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.166626 | orchestrator |
2025-09-19 16:50:34.166633 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-19 16:50:34.166640 | orchestrator | Friday 19 September 2025  16:50:27 +0000 (0:00:00.217)       0:00:09.679 ******
2025-09-19 16:50:34.166647 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.166654 | orchestrator |
2025-09-19 16:50:34.166661 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-19 16:50:34.166668 | orchestrator | Friday 19 September 2025  16:50:27 +0000 (0:00:00.135)       0:00:09.814 ******
2025-09-19 16:50:34.166676 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '502e1679-2b8a-59ad-b2cc-f53252d80a70'}})
2025-09-19 16:50:34.166684 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '189b9442-6cba-5a76-9378-3098f039bcec'}})
2025-09-19 16:50:34.166691 | orchestrator |
2025-09-19 16:50:34.166698 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-19 16:50:34.166705 | orchestrator | Friday 19 September 2025  16:50:27 +0000 (0:00:00.206)       0:00:10.021 ******
2025-09-19 16:50:34.166713 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})
2025-09-19 16:50:34.166736 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})
2025-09-19 16:50:34.166743 | orchestrator |
2025-09-19 16:50:34.166750 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-19 16:50:34.166769 | orchestrator | Friday 19 September 2025  16:50:29 +0000 (0:00:01.958)       0:00:11.980 ******
2025-09-19 16:50:34.166777 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})
2025-09-19 16:50:34.166785 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})
2025-09-19 16:50:34.166792 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.166799 | orchestrator |
2025-09-19 16:50:34.166858 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-19 16:50:34.166865 | orchestrator | Friday 19 September 2025  16:50:30 +0000 (0:00:00.165)       0:00:12.145 ******
2025-09-19 16:50:34.166873 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})
2025-09-19 16:50:34.166880 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})
2025-09-19 16:50:34.166887 | orchestrator |
2025-09-19 16:50:34.166894 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-19 16:50:34.166901 | orchestrator | Friday 19 September 2025  16:50:31 +0000 (0:00:01.577)       0:00:13.722 ******
2025-09-19 16:50:34.166909 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})
2025-09-19 16:50:34.166917 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})
2025-09-19 16:50:34.166925 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.166933 | orchestrator |
2025-09-19 16:50:34.166942 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-19 16:50:34.166950 | orchestrator | Friday 19 September 2025  16:50:31 +0000 (0:00:00.195)       0:00:13.918 ******
2025-09-19 16:50:34.166958 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.166966 | orchestrator |
2025-09-19 16:50:34.166974 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-19 16:50:34.166996 | orchestrator | Friday 19 September 2025  16:50:31 +0000 (0:00:00.170)       0:00:14.088 ******
2025-09-19 16:50:34.167004 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})
2025-09-19 16:50:34.167012 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})
2025-09-19 16:50:34.167020 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.167028 | orchestrator |
2025-09-19 16:50:34.167036 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-19 16:50:34.167044 | orchestrator | Friday 19 September 2025  16:50:32 +0000 (0:00:00.471)       0:00:14.559 ******
2025-09-19 16:50:34.167052 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.167061 | orchestrator |
2025-09-19 16:50:34.167069 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-19 16:50:34.167077 | orchestrator | Friday 19 September 2025  16:50:32 +0000 (0:00:00.178)       0:00:14.738 ******
2025-09-19 16:50:34.167085 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})
2025-09-19 16:50:34.167101 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})
2025-09-19 16:50:34.167109 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.167117 | orchestrator |
2025-09-19 16:50:34.167124 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-19 16:50:34.167133 | orchestrator | Friday 19 September 2025  16:50:32 +0000 (0:00:00.200)       0:00:14.938 ******
2025-09-19 16:50:34.167141 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.167149 | orchestrator |
2025-09-19 16:50:34.167157 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-19 16:50:34.167165 | orchestrator | Friday 19 September 2025  16:50:32 +0000 (0:00:00.154)       0:00:15.093 ******
2025-09-19 16:50:34.167173 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})
2025-09-19 16:50:34.167181 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})
2025-09-19 16:50:34.167189 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.167197 | orchestrator |
2025-09-19 16:50:34.167205 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-19 16:50:34.167213 | orchestrator | Friday 19 September 2025  16:50:33 +0000 (0:00:00.187)       0:00:15.281 ******
2025-09-19 16:50:34.167221 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:50:34.167229 | orchestrator |
2025-09-19 16:50:34.167236 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-19 16:50:34.167244 | orchestrator | Friday 19 September 2025  16:50:33 +0000 (0:00:00.148)       0:00:15.430 ******
2025-09-19 16:50:34.167253 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})
2025-09-19 16:50:34.167261 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})
2025-09-19 16:50:34.167269 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:50:34.167277 | orchestrator |
2025-09-19 16:50:34.167285 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-19 16:50:34.167298 | orchestrator | Friday 19 September 2025  16:50:33 +0000 (0:00:00.174)       0:00:15.605 ******
2025-09-19 16:50:34.167305 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})
2025-09-19 16:50:34.167313 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})  2025-09-19 16:50:34.167320 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:34.167327 | orchestrator | 2025-09-19 16:50:34.167334 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-19 16:50:34.167341 | orchestrator | Friday 19 September 2025 16:50:33 +0000 (0:00:00.156) 0:00:15.761 ****** 2025-09-19 16:50:34.167348 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})  2025-09-19 16:50:34.167355 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})  2025-09-19 16:50:34.167363 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:34.167370 | orchestrator | 2025-09-19 16:50:34.167377 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-19 16:50:34.167384 | orchestrator | Friday 19 September 2025 16:50:33 +0000 (0:00:00.152) 0:00:15.914 ****** 2025-09-19 16:50:34.167391 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:34.167403 | orchestrator | 2025-09-19 16:50:34.167410 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-19 16:50:34.167417 | orchestrator | Friday 19 September 2025 16:50:33 +0000 (0:00:00.177) 0:00:16.091 ****** 2025-09-19 16:50:34.167424 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:34.167431 | orchestrator | 2025-09-19 16:50:34.167442 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-19 16:50:40.918012 | orchestrator | Friday 19 September 2025 16:50:34 +0000 (0:00:00.162) 
0:00:16.253 ****** 2025-09-19 16:50:40.918162 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.918179 | orchestrator | 2025-09-19 16:50:40.918190 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-19 16:50:40.918202 | orchestrator | Friday 19 September 2025 16:50:34 +0000 (0:00:00.132) 0:00:16.386 ****** 2025-09-19 16:50:40.918212 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 16:50:40.918223 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-19 16:50:40.918233 | orchestrator | } 2025-09-19 16:50:40.918243 | orchestrator | 2025-09-19 16:50:40.918253 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-19 16:50:40.918263 | orchestrator | Friday 19 September 2025 16:50:34 +0000 (0:00:00.357) 0:00:16.744 ****** 2025-09-19 16:50:40.918272 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 16:50:40.918283 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-19 16:50:40.918292 | orchestrator | } 2025-09-19 16:50:40.918302 | orchestrator | 2025-09-19 16:50:40.918313 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-19 16:50:40.918323 | orchestrator | Friday 19 September 2025 16:50:34 +0000 (0:00:00.163) 0:00:16.907 ****** 2025-09-19 16:50:40.918332 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 16:50:40.918342 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-19 16:50:40.918351 | orchestrator | } 2025-09-19 16:50:40.918361 | orchestrator | 2025-09-19 16:50:40.918370 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-19 16:50:40.918380 | orchestrator | Friday 19 September 2025 16:50:34 +0000 (0:00:00.136) 0:00:17.044 ****** 2025-09-19 16:50:40.918388 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:50:40.918398 | orchestrator | 2025-09-19 16:50:40.918408 | orchestrator | TASK [Gather WAL VGs with 
total and available size in bytes] ******************* 2025-09-19 16:50:40.918417 | orchestrator | Friday 19 September 2025 16:50:35 +0000 (0:00:00.641) 0:00:17.685 ****** 2025-09-19 16:50:40.918426 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:50:40.918435 | orchestrator | 2025-09-19 16:50:40.918445 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-19 16:50:40.918454 | orchestrator | Friday 19 September 2025 16:50:36 +0000 (0:00:00.569) 0:00:18.255 ****** 2025-09-19 16:50:40.918462 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:50:40.918471 | orchestrator | 2025-09-19 16:50:40.918481 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-19 16:50:40.918491 | orchestrator | Friday 19 September 2025 16:50:36 +0000 (0:00:00.530) 0:00:18.785 ****** 2025-09-19 16:50:40.918500 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:50:40.918510 | orchestrator | 2025-09-19 16:50:40.918520 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-19 16:50:40.918530 | orchestrator | Friday 19 September 2025 16:50:36 +0000 (0:00:00.157) 0:00:18.943 ****** 2025-09-19 16:50:40.918540 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.918550 | orchestrator | 2025-09-19 16:50:40.918561 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-19 16:50:40.918571 | orchestrator | Friday 19 September 2025 16:50:36 +0000 (0:00:00.101) 0:00:19.044 ****** 2025-09-19 16:50:40.918582 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.918593 | orchestrator | 2025-09-19 16:50:40.918603 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-19 16:50:40.918612 | orchestrator | Friday 19 September 2025 16:50:37 +0000 (0:00:00.125) 0:00:19.169 ****** 2025-09-19 16:50:40.918648 | orchestrator | ok: 
[testbed-node-3] => { 2025-09-19 16:50:40.918660 | orchestrator |  "vgs_report": { 2025-09-19 16:50:40.918686 | orchestrator |  "vg": [] 2025-09-19 16:50:40.918697 | orchestrator |  } 2025-09-19 16:50:40.918707 | orchestrator | } 2025-09-19 16:50:40.918716 | orchestrator | 2025-09-19 16:50:40.918726 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-19 16:50:40.918736 | orchestrator | Friday 19 September 2025 16:50:37 +0000 (0:00:00.187) 0:00:19.357 ****** 2025-09-19 16:50:40.918745 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.918754 | orchestrator | 2025-09-19 16:50:40.918764 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-19 16:50:40.918774 | orchestrator | Friday 19 September 2025 16:50:37 +0000 (0:00:00.168) 0:00:19.526 ****** 2025-09-19 16:50:40.918784 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.918794 | orchestrator | 2025-09-19 16:50:40.918831 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-19 16:50:40.918841 | orchestrator | Friday 19 September 2025 16:50:37 +0000 (0:00:00.167) 0:00:19.693 ****** 2025-09-19 16:50:40.918850 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.918859 | orchestrator | 2025-09-19 16:50:40.918868 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-19 16:50:40.918879 | orchestrator | Friday 19 September 2025 16:50:37 +0000 (0:00:00.309) 0:00:20.003 ****** 2025-09-19 16:50:40.918889 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.918900 | orchestrator | 2025-09-19 16:50:40.918910 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-19 16:50:40.918918 | orchestrator | Friday 19 September 2025 16:50:38 +0000 (0:00:00.155) 0:00:20.159 ****** 2025-09-19 16:50:40.918927 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 16:50:40.918936 | orchestrator | 2025-09-19 16:50:40.918945 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-19 16:50:40.918956 | orchestrator | Friday 19 September 2025 16:50:38 +0000 (0:00:00.138) 0:00:20.297 ****** 2025-09-19 16:50:40.918965 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.918974 | orchestrator | 2025-09-19 16:50:40.918982 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-19 16:50:40.918991 | orchestrator | Friday 19 September 2025 16:50:38 +0000 (0:00:00.255) 0:00:20.552 ****** 2025-09-19 16:50:40.919000 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.919009 | orchestrator | 2025-09-19 16:50:40.919017 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-19 16:50:40.919026 | orchestrator | Friday 19 September 2025 16:50:38 +0000 (0:00:00.167) 0:00:20.720 ****** 2025-09-19 16:50:40.919034 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.919043 | orchestrator | 2025-09-19 16:50:40.919052 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-19 16:50:40.919081 | orchestrator | Friday 19 September 2025 16:50:38 +0000 (0:00:00.153) 0:00:20.873 ****** 2025-09-19 16:50:40.919092 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.919101 | orchestrator | 2025-09-19 16:50:40.919110 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-19 16:50:40.919119 | orchestrator | Friday 19 September 2025 16:50:38 +0000 (0:00:00.139) 0:00:21.013 ****** 2025-09-19 16:50:40.919126 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.919134 | orchestrator | 2025-09-19 16:50:40.919142 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-19 16:50:40.919149 | 
orchestrator | Friday 19 September 2025 16:50:39 +0000 (0:00:00.145) 0:00:21.158 ****** 2025-09-19 16:50:40.919157 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.919165 | orchestrator | 2025-09-19 16:50:40.919172 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-19 16:50:40.919180 | orchestrator | Friday 19 September 2025 16:50:39 +0000 (0:00:00.133) 0:00:21.292 ****** 2025-09-19 16:50:40.919187 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.919196 | orchestrator | 2025-09-19 16:50:40.919216 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-19 16:50:40.919223 | orchestrator | Friday 19 September 2025 16:50:39 +0000 (0:00:00.141) 0:00:21.434 ****** 2025-09-19 16:50:40.919231 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.919239 | orchestrator | 2025-09-19 16:50:40.919246 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-19 16:50:40.919253 | orchestrator | Friday 19 September 2025 16:50:39 +0000 (0:00:00.147) 0:00:21.582 ****** 2025-09-19 16:50:40.919262 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.919270 | orchestrator | 2025-09-19 16:50:40.919278 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-19 16:50:40.919286 | orchestrator | Friday 19 September 2025 16:50:39 +0000 (0:00:00.134) 0:00:21.717 ****** 2025-09-19 16:50:40.919294 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})  2025-09-19 16:50:40.919304 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})  2025-09-19 16:50:40.919311 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
16:50:40.919319 | orchestrator | 2025-09-19 16:50:40.919326 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-19 16:50:40.919334 | orchestrator | Friday 19 September 2025 16:50:40 +0000 (0:00:00.388) 0:00:22.106 ****** 2025-09-19 16:50:40.919343 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})  2025-09-19 16:50:40.919351 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})  2025-09-19 16:50:40.919360 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.919368 | orchestrator | 2025-09-19 16:50:40.919377 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-19 16:50:40.919386 | orchestrator | Friday 19 September 2025 16:50:40 +0000 (0:00:00.198) 0:00:22.305 ****** 2025-09-19 16:50:40.919395 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})  2025-09-19 16:50:40.919404 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})  2025-09-19 16:50:40.919413 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.919421 | orchestrator | 2025-09-19 16:50:40.919429 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-19 16:50:40.919437 | orchestrator | Friday 19 September 2025 16:50:40 +0000 (0:00:00.177) 0:00:22.482 ****** 2025-09-19 16:50:40.919458 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})  2025-09-19 
16:50:40.919466 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})  2025-09-19 16:50:40.919474 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.919482 | orchestrator | 2025-09-19 16:50:40.919491 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-19 16:50:40.919500 | orchestrator | Friday 19 September 2025 16:50:40 +0000 (0:00:00.172) 0:00:22.654 ****** 2025-09-19 16:50:40.919509 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})  2025-09-19 16:50:40.919518 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})  2025-09-19 16:50:40.919527 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:40.919543 | orchestrator | 2025-09-19 16:50:40.919552 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-19 16:50:40.919560 | orchestrator | Friday 19 September 2025 16:50:40 +0000 (0:00:00.161) 0:00:22.815 ****** 2025-09-19 16:50:40.919570 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})  2025-09-19 16:50:40.919587 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})  2025-09-19 16:50:46.403540 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:46.403637 | orchestrator | 2025-09-19 16:50:46.403652 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-19 16:50:46.403665 | orchestrator | Friday 19 September 2025 
16:50:40 +0000 (0:00:00.191) 0:00:23.006 ****** 2025-09-19 16:50:46.403692 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})  2025-09-19 16:50:46.403704 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})  2025-09-19 16:50:46.403714 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:46.403724 | orchestrator | 2025-09-19 16:50:46.403734 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-19 16:50:46.403744 | orchestrator | Friday 19 September 2025 16:50:41 +0000 (0:00:00.193) 0:00:23.200 ****** 2025-09-19 16:50:46.403754 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})  2025-09-19 16:50:46.403764 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})  2025-09-19 16:50:46.403774 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:46.403784 | orchestrator | 2025-09-19 16:50:46.403794 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-19 16:50:46.403855 | orchestrator | Friday 19 September 2025 16:50:41 +0000 (0:00:00.184) 0:00:23.384 ****** 2025-09-19 16:50:46.403868 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:50:46.403879 | orchestrator | 2025-09-19 16:50:46.403889 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-09-19 16:50:46.403898 | orchestrator | Friday 19 September 2025 16:50:41 +0000 (0:00:00.584) 0:00:23.968 ****** 2025-09-19 16:50:46.403908 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:50:46.403917 | 
orchestrator | 2025-09-19 16:50:46.403938 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-19 16:50:46.403957 | orchestrator | Friday 19 September 2025 16:50:42 +0000 (0:00:00.542) 0:00:24.511 ****** 2025-09-19 16:50:46.403967 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:50:46.403976 | orchestrator | 2025-09-19 16:50:46.403986 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-19 16:50:46.403996 | orchestrator | Friday 19 September 2025 16:50:42 +0000 (0:00:00.149) 0:00:24.660 ****** 2025-09-19 16:50:46.404006 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'vg_name': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'}) 2025-09-19 16:50:46.404017 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'vg_name': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'}) 2025-09-19 16:50:46.404026 | orchestrator | 2025-09-19 16:50:46.404041 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-19 16:50:46.404051 | orchestrator | Friday 19 September 2025 16:50:42 +0000 (0:00:00.171) 0:00:24.831 ****** 2025-09-19 16:50:46.404061 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})  2025-09-19 16:50:46.404093 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})  2025-09-19 16:50:46.404105 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:46.404117 | orchestrator | 2025-09-19 16:50:46.404128 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-19 16:50:46.404139 | orchestrator | Friday 19 September 2025 16:50:43 +0000 
(0:00:00.359) 0:00:25.191 ****** 2025-09-19 16:50:46.404150 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})  2025-09-19 16:50:46.404161 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})  2025-09-19 16:50:46.404172 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:46.404182 | orchestrator | 2025-09-19 16:50:46.404194 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-19 16:50:46.404205 | orchestrator | Friday 19 September 2025 16:50:43 +0000 (0:00:00.164) 0:00:25.355 ****** 2025-09-19 16:50:46.404216 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})  2025-09-19 16:50:46.404227 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})  2025-09-19 16:50:46.404238 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:50:46.404249 | orchestrator | 2025-09-19 16:50:46.404260 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-19 16:50:46.404271 | orchestrator | Friday 19 September 2025 16:50:43 +0000 (0:00:00.159) 0:00:25.515 ****** 2025-09-19 16:50:46.404282 | orchestrator | ok: [testbed-node-3] => { 2025-09-19 16:50:46.404293 | orchestrator |  "lvm_report": { 2025-09-19 16:50:46.404305 | orchestrator |  "lv": [ 2025-09-19 16:50:46.404315 | orchestrator |  { 2025-09-19 16:50:46.404341 | orchestrator |  "lv_name": "osd-block-189b9442-6cba-5a76-9378-3098f039bcec", 2025-09-19 16:50:46.404353 | orchestrator |  "vg_name": "ceph-189b9442-6cba-5a76-9378-3098f039bcec" 2025-09-19 16:50:46.404364 
| orchestrator |  }, 2025-09-19 16:50:46.404375 | orchestrator |  { 2025-09-19 16:50:46.404386 | orchestrator |  "lv_name": "osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70", 2025-09-19 16:50:46.404396 | orchestrator |  "vg_name": "ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70" 2025-09-19 16:50:46.404407 | orchestrator |  } 2025-09-19 16:50:46.404418 | orchestrator |  ], 2025-09-19 16:50:46.404429 | orchestrator |  "pv": [ 2025-09-19 16:50:46.404440 | orchestrator |  { 2025-09-19 16:50:46.404451 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-19 16:50:46.404461 | orchestrator |  "vg_name": "ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70" 2025-09-19 16:50:46.404470 | orchestrator |  }, 2025-09-19 16:50:46.404480 | orchestrator |  { 2025-09-19 16:50:46.404490 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-19 16:50:46.404499 | orchestrator |  "vg_name": "ceph-189b9442-6cba-5a76-9378-3098f039bcec" 2025-09-19 16:50:46.404509 | orchestrator |  } 2025-09-19 16:50:46.404519 | orchestrator |  ] 2025-09-19 16:50:46.404528 | orchestrator |  } 2025-09-19 16:50:46.404538 | orchestrator | } 2025-09-19 16:50:46.404548 | orchestrator | 2025-09-19 16:50:46.404558 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-19 16:50:46.404567 | orchestrator | 2025-09-19 16:50:46.404577 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 16:50:46.404587 | orchestrator | Friday 19 September 2025 16:50:43 +0000 (0:00:00.289) 0:00:25.804 ****** 2025-09-19 16:50:46.404596 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-19 16:50:46.404613 | orchestrator | 2025-09-19 16:50:46.404623 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 16:50:46.404633 | orchestrator | Friday 19 September 2025 16:50:43 +0000 (0:00:00.245) 0:00:26.049 ****** 2025-09-19 16:50:46.404643 | orchestrator | ok: [testbed-node-4] 
2025-09-19 16:50:46.404653 | orchestrator | 2025-09-19 16:50:46.404662 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:50:46.404672 | orchestrator | Friday 19 September 2025 16:50:44 +0000 (0:00:00.230) 0:00:26.280 ****** 2025-09-19 16:50:46.404681 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-19 16:50:46.404691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-19 16:50:46.404701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-19 16:50:46.404710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-19 16:50:46.404720 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-19 16:50:46.404729 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-19 16:50:46.404739 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-19 16:50:46.404753 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-19 16:50:46.404763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-19 16:50:46.404772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-19 16:50:46.404782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-19 16:50:46.404791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-19 16:50:46.404801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-19 16:50:46.404828 | orchestrator | 2025-09-19 16:50:46.404838 | orchestrator 
| TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:50:46.404848 | orchestrator | Friday 19 September 2025 16:50:44 +0000 (0:00:00.400) 0:00:26.681 ****** 2025-09-19 16:50:46.404857 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:50:46.404867 | orchestrator | 2025-09-19 16:50:46.404877 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:50:46.404886 | orchestrator | Friday 19 September 2025 16:50:44 +0000 (0:00:00.184) 0:00:26.865 ****** 2025-09-19 16:50:46.404896 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:50:46.404906 | orchestrator | 2025-09-19 16:50:46.404915 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:50:46.404925 | orchestrator | Friday 19 September 2025 16:50:44 +0000 (0:00:00.187) 0:00:27.053 ****** 2025-09-19 16:50:46.404935 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:50:46.404944 | orchestrator | 2025-09-19 16:50:46.404954 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:50:46.404964 | orchestrator | Friday 19 September 2025 16:50:45 +0000 (0:00:00.578) 0:00:27.632 ****** 2025-09-19 16:50:46.404973 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:50:46.404983 | orchestrator | 2025-09-19 16:50:46.404992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:50:46.405002 | orchestrator | Friday 19 September 2025 16:50:45 +0000 (0:00:00.193) 0:00:27.825 ****** 2025-09-19 16:50:46.405012 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:50:46.405021 | orchestrator | 2025-09-19 16:50:46.405031 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:50:46.405041 | orchestrator | Friday 19 September 2025 16:50:45 +0000 (0:00:00.191) 0:00:28.017 ****** 2025-09-19 
16:50:46.405051 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:46.405060 | orchestrator | 
2025-09-19 16:50:46.405076 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:46.405086 | orchestrator | Friday 19 September 2025 16:50:46 +0000 (0:00:00.242) 0:00:28.259 ******
2025-09-19 16:50:46.405096 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:46.405106 | orchestrator | 
2025-09-19 16:50:46.405121 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:56.720208 | orchestrator | Friday 19 September 2025 16:50:46 +0000 (0:00:00.232) 0:00:28.492 ******
2025-09-19 16:50:56.720318 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.720335 | orchestrator | 
2025-09-19 16:50:56.720348 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:56.720360 | orchestrator | Friday 19 September 2025 16:50:46 +0000 (0:00:00.208) 0:00:28.700 ******
2025-09-19 16:50:56.720371 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f)
2025-09-19 16:50:56.720384 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f)
2025-09-19 16:50:56.720395 | orchestrator | 
2025-09-19 16:50:56.720406 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:56.720418 | orchestrator | Friday 19 September 2025 16:50:47 +0000 (0:00:00.442) 0:00:29.143 ******
2025-09-19 16:50:56.720429 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8c3574da-2fac-4f58-bc83-f51ba9425a73)
2025-09-19 16:50:56.720440 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8c3574da-2fac-4f58-bc83-f51ba9425a73)
2025-09-19 16:50:56.720451 | orchestrator | 
2025-09-19 16:50:56.720461 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:56.720472 | orchestrator | Friday 19 September 2025 16:50:47 +0000 (0:00:00.471) 0:00:29.614 ******
2025-09-19 16:50:56.720483 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8547d473-0710-428a-9585-3879cf611acd)
2025-09-19 16:50:56.720494 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8547d473-0710-428a-9585-3879cf611acd)
2025-09-19 16:50:56.720505 | orchestrator | 
2025-09-19 16:50:56.720516 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:56.720527 | orchestrator | Friday 19 September 2025 16:50:47 +0000 (0:00:00.478) 0:00:30.092 ******
2025-09-19 16:50:56.720537 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_8ef3193b-7b85-4a69-91dc-ff1919c1d0b3)
2025-09-19 16:50:56.720548 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_8ef3193b-7b85-4a69-91dc-ff1919c1d0b3)
2025-09-19 16:50:56.720559 | orchestrator | 
2025-09-19 16:50:56.720570 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-19 16:50:56.720581 | orchestrator | Friday 19 September 2025 16:50:48 +0000 (0:00:00.523) 0:00:30.616 ******
2025-09-19 16:50:56.720591 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-19 16:50:56.720602 | orchestrator | 
2025-09-19 16:50:56.720613 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.720624 | orchestrator | Friday 19 September 2025 16:50:48 +0000 (0:00:00.398) 0:00:31.015 ******
2025-09-19 16:50:56.720635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-19 16:50:56.720656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-19 16:50:56.720674 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-19 16:50:56.720692 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-19 16:50:56.720709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-19 16:50:56.720727 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-09-19 16:50:56.720744 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-09-19 16:50:56.720794 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-09-19 16:50:56.720844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-09-19 16:50:56.720864 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-09-19 16:50:56.720883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-09-19 16:50:56.720894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-09-19 16:50:56.720905 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-09-19 16:50:56.720916 | orchestrator | 
2025-09-19 16:50:56.720946 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.720958 | orchestrator | Friday 19 September 2025 16:50:49 +0000 (0:00:00.602) 0:00:31.617 ******
2025-09-19 16:50:56.720968 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.720979 | orchestrator | 
2025-09-19 16:50:56.720990 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.721001 | orchestrator | Friday 19 September 2025 16:50:49 +0000 (0:00:00.236) 0:00:31.854 ******
2025-09-19 16:50:56.721012 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.721023 | orchestrator | 
2025-09-19 16:50:56.721034 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.721045 | orchestrator | Friday 19 September 2025 16:50:49 +0000 (0:00:00.206) 0:00:32.060 ******
2025-09-19 16:50:56.721056 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.721067 | orchestrator | 
2025-09-19 16:50:56.721078 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.721088 | orchestrator | Friday 19 September 2025 16:50:50 +0000 (0:00:00.194) 0:00:32.254 ******
2025-09-19 16:50:56.721099 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.721110 | orchestrator | 
2025-09-19 16:50:56.721140 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.721152 | orchestrator | Friday 19 September 2025 16:50:50 +0000 (0:00:00.211) 0:00:32.466 ******
2025-09-19 16:50:56.721163 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.721174 | orchestrator | 
2025-09-19 16:50:56.721185 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.721196 | orchestrator | Friday 19 September 2025 16:50:50 +0000 (0:00:00.261) 0:00:32.727 ******
2025-09-19 16:50:56.721206 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.721217 | orchestrator | 
2025-09-19 16:50:56.721228 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.721239 | orchestrator | Friday 19 September 2025 16:50:50 +0000 (0:00:00.204) 0:00:32.931 ******
2025-09-19 16:50:56.721250 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.721261 | orchestrator | 
2025-09-19 16:50:56.721272 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.721282 | orchestrator | Friday 19 September 2025 16:50:51 +0000 (0:00:00.185) 0:00:33.117 ******
2025-09-19 16:50:56.721293 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.721304 | orchestrator | 
2025-09-19 16:50:56.721315 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.721326 | orchestrator | Friday 19 September 2025 16:50:51 +0000 (0:00:00.223) 0:00:33.341 ******
2025-09-19 16:50:56.721337 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2025-09-19 16:50:56.721348 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2025-09-19 16:50:56.721359 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2025-09-19 16:50:56.721370 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2025-09-19 16:50:56.721381 | orchestrator | 
2025-09-19 16:50:56.721393 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.721404 | orchestrator | Friday 19 September 2025 16:50:52 +0000 (0:00:00.759) 0:00:34.100 ******
2025-09-19 16:50:56.721424 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.721435 | orchestrator | 
2025-09-19 16:50:56.721447 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.721458 | orchestrator | Friday 19 September 2025 16:50:52 +0000 (0:00:00.170) 0:00:34.271 ******
2025-09-19 16:50:56.721468 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.721479 | orchestrator | 
2025-09-19 16:50:56.721490 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.721501 | orchestrator | Friday 19 September 2025 16:50:52 +0000 (0:00:00.184) 0:00:34.456 ******
2025-09-19 16:50:56.721512 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.721523 | orchestrator | 
2025-09-19 16:50:56.721533 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-19 16:50:56.721544 | orchestrator | Friday 19 September 2025 16:50:52 +0000 (0:00:00.457) 0:00:34.913 ******
2025-09-19 16:50:56.721555 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.721566 | orchestrator | 
2025-09-19 16:50:56.721577 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-19 16:50:56.721588 | orchestrator | Friday 19 September 2025 16:50:52 +0000 (0:00:00.180) 0:00:35.094 ******
2025-09-19 16:50:56.721604 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.721615 | orchestrator | 
2025-09-19 16:50:56.721627 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-19 16:50:56.721638 | orchestrator | Friday 19 September 2025 16:50:53 +0000 (0:00:00.115) 0:00:35.210 ******
2025-09-19 16:50:56.721649 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'}})
2025-09-19 16:50:56.721660 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'}})
2025-09-19 16:50:56.721671 | orchestrator | 
2025-09-19 16:50:56.721682 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-19 16:50:56.721693 | orchestrator | Friday 19 September 2025 16:50:53 +0000 (0:00:00.181) 0:00:35.392 ******
2025-09-19 16:50:56.721705 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:50:56.721718 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:50:56.721729 | orchestrator | 
2025-09-19 16:50:56.721740 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-19 16:50:56.721751 | orchestrator | Friday 19 September 2025 16:50:55 +0000 (0:00:01.918) 0:00:37.311 ******
2025-09-19 16:50:56.721762 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:50:56.721774 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:50:56.721784 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:50:56.721795 | orchestrator | 
2025-09-19 16:50:56.721823 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-19 16:50:56.721834 | orchestrator | Friday 19 September 2025 16:50:55 +0000 (0:00:00.136) 0:00:37.447 ******
2025-09-19 16:50:56.721845 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:50:56.721856 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:50:56.721867 | orchestrator | 
2025-09-19 16:50:56.721885 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-19 16:51:02.313938 | orchestrator | Friday 19 September 2025 16:50:56 +0000 (0:00:01.359) 0:00:38.807 ******
2025-09-19 16:51:02.314128 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:02.314148 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:02.314160 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.314173 | orchestrator | 
2025-09-19 16:51:02.314185 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-19 16:51:02.314197 | orchestrator | Friday 19 September 2025 16:50:56 +0000 (0:00:00.220) 0:00:39.028 ******
2025-09-19 16:51:02.314208 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.314218 | orchestrator | 
2025-09-19 16:51:02.314230 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-19 16:51:02.314241 | orchestrator | Friday 19 September 2025 16:50:57 +0000 (0:00:00.145) 0:00:39.173 ******
2025-09-19 16:51:02.314252 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:02.314263 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:02.314274 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.314284 | orchestrator | 
2025-09-19 16:51:02.314295 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-19 16:51:02.314306 | orchestrator | Friday 19 September 2025 16:50:57 +0000 (0:00:00.152) 0:00:39.326 ******
2025-09-19 16:51:02.314317 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.314328 | orchestrator | 
2025-09-19 16:51:02.314338 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-19 16:51:02.314349 | orchestrator | Friday 19 September 2025 16:50:57 +0000 (0:00:00.137) 0:00:39.464 ******
2025-09-19 16:51:02.314360 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:02.314371 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:02.314382 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.314392 | orchestrator | 
2025-09-19 16:51:02.314403 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-19 16:51:02.314414 | orchestrator | Friday 19 September 2025 16:50:57 +0000 (0:00:00.151) 0:00:39.615 ******
2025-09-19 16:51:02.314439 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.314452 | orchestrator | 
2025-09-19 16:51:02.314465 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-19 16:51:02.314477 | orchestrator | Friday 19 September 2025 16:50:57 +0000 (0:00:00.317) 0:00:39.933 ******
2025-09-19 16:51:02.314489 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:02.314502 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:02.314515 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.314528 | orchestrator | 
2025-09-19 16:51:02.314541 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-19 16:51:02.314553 | orchestrator | Friday 19 September 2025 16:50:57 +0000 (0:00:00.148) 0:00:40.082 ******
2025-09-19 16:51:02.314565 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:51:02.314578 | orchestrator | 
2025-09-19 16:51:02.314590 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-19 16:51:02.314603 | orchestrator | Friday 19 September 2025 16:50:58 +0000 (0:00:00.158) 0:00:40.240 ******
2025-09-19 16:51:02.314626 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:02.314639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:02.314652 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.314665 | orchestrator | 
2025-09-19 16:51:02.314677 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-19 16:51:02.314690 | orchestrator | Friday 19 September 2025 16:50:58 +0000 (0:00:00.159) 0:00:40.400 ******
2025-09-19 16:51:02.314703 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:02.314715 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:02.314728 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.314740 | orchestrator | 
2025-09-19 16:51:02.314753 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-19 16:51:02.314765 | orchestrator | Friday 19 September 2025 16:50:58 +0000 (0:00:00.157) 0:00:40.558 ******
2025-09-19 16:51:02.314794 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:02.314845 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:02.314868 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.314886 | orchestrator | 
2025-09-19 16:51:02.314905 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-19 16:51:02.314934 | orchestrator | Friday 19 September 2025 16:50:58 +0000 (0:00:00.153) 0:00:40.711 ******
2025-09-19 16:51:02.314953 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.314971 | orchestrator | 
2025-09-19 16:51:02.314989 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-19 16:51:02.315006 | orchestrator | Friday 19 September 2025 16:50:58 +0000 (0:00:00.139) 0:00:40.851 ******
2025-09-19 16:51:02.315022 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.315038 | orchestrator | 
2025-09-19 16:51:02.315054 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-19 16:51:02.315071 | orchestrator | Friday 19 September 2025 16:50:58 +0000 (0:00:00.139) 0:00:40.991 ******
2025-09-19 16:51:02.315088 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.315105 | orchestrator | 
2025-09-19 16:51:02.315123 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-19 16:51:02.315142 | orchestrator | Friday 19 September 2025 16:50:59 +0000 (0:00:00.130) 0:00:41.122 ******
2025-09-19 16:51:02.315160 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 16:51:02.315178 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-19 16:51:02.315197 | orchestrator | }
2025-09-19 16:51:02.315215 | orchestrator | 
2025-09-19 16:51:02.315234 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-19 16:51:02.315252 | orchestrator | Friday 19 September 2025 16:50:59 +0000 (0:00:00.144) 0:00:41.266 ******
2025-09-19 16:51:02.315272 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 16:51:02.315284 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-19 16:51:02.315295 | orchestrator | }
2025-09-19 16:51:02.315306 | orchestrator | 
2025-09-19 16:51:02.315317 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-19 16:51:02.315327 | orchestrator | Friday 19 September 2025 16:50:59 +0000 (0:00:00.147) 0:00:41.414 ******
2025-09-19 16:51:02.315338 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 16:51:02.315349 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-19 16:51:02.315373 | orchestrator | }
2025-09-19 16:51:02.315384 | orchestrator | 
2025-09-19 16:51:02.315395 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-19 16:51:02.315406 | orchestrator | Friday 19 September 2025 16:50:59 +0000 (0:00:00.144) 0:00:41.558 ******
2025-09-19 16:51:02.315417 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:51:02.315427 | orchestrator | 
2025-09-19 16:51:02.315438 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-19 16:51:02.315449 | orchestrator | Friday 19 September 2025 16:51:00 +0000 (0:00:00.705) 0:00:42.264 ******
2025-09-19 16:51:02.315461 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:51:02.315472 | orchestrator | 
2025-09-19 16:51:02.315483 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-19 16:51:02.315494 | orchestrator | Friday 19 September 2025 16:51:00 +0000 (0:00:00.509) 0:00:42.774 ******
2025-09-19 16:51:02.315504 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:51:02.315515 | orchestrator | 
2025-09-19 16:51:02.315526 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-19 16:51:02.315537 | orchestrator | Friday 19 September 2025 16:51:01 +0000 (0:00:00.518) 0:00:43.293 ******
2025-09-19 16:51:02.315548 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:51:02.315559 | orchestrator | 
2025-09-19 16:51:02.315569 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-19 16:51:02.315580 | orchestrator | Friday 19 September 2025 16:51:01 +0000 (0:00:00.158) 0:00:43.451 ******
2025-09-19 16:51:02.315591 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.315601 | orchestrator | 
2025-09-19 16:51:02.315612 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-19 16:51:02.315623 | orchestrator | Friday 19 September 2025 16:51:01 +0000 (0:00:00.123) 0:00:43.574 ******
2025-09-19 16:51:02.315633 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.315644 | orchestrator | 
2025-09-19 16:51:02.315655 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-19 16:51:02.315666 | orchestrator | Friday 19 September 2025 16:51:01 +0000 (0:00:00.122) 0:00:43.697 ******
2025-09-19 16:51:02.315676 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 16:51:02.315687 | orchestrator |     "vgs_report": {
2025-09-19 16:51:02.315699 | orchestrator |         "vg": []
2025-09-19 16:51:02.315710 | orchestrator |     }
2025-09-19 16:51:02.315721 | orchestrator | }
2025-09-19 16:51:02.315732 | orchestrator | 
2025-09-19 16:51:02.315743 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-19 16:51:02.315757 | orchestrator | Friday 19 September 2025 16:51:01 +0000 (0:00:00.133) 0:00:43.831 ******
2025-09-19 16:51:02.315775 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.315849 | orchestrator | 
2025-09-19 16:51:02.315870 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-19 16:51:02.315886 | orchestrator | Friday 19 September 2025 16:51:01 +0000 (0:00:00.140) 0:00:43.972 ******
2025-09-19 16:51:02.315902 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.315921 | orchestrator | 
2025-09-19 16:51:02.315956 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-19 16:51:02.315974 | orchestrator | Friday 19 September 2025 16:51:02 +0000 (0:00:00.133) 0:00:44.105 ******
2025-09-19 16:51:02.315992 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.316011 | orchestrator | 
2025-09-19 16:51:02.316029 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-19 16:51:02.316048 | orchestrator | Friday 19 September 2025 16:51:02 +0000 (0:00:00.146) 0:00:44.251 ******
2025-09-19 16:51:02.316066 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:02.316083 | orchestrator | 
2025-09-19 16:51:02.316101 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-19 16:51:02.316137 | orchestrator | Friday 19 September 2025 16:51:02 +0000 (0:00:00.148) 0:00:44.400 ******
2025-09-19 16:51:07.203622 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.204548 | orchestrator | 
2025-09-19 16:51:07.204606 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-19 16:51:07.204620 | orchestrator | Friday 19 September 2025 16:51:02 +0000 (0:00:00.155) 0:00:44.556 ******
2025-09-19 16:51:07.204631 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.204642 | orchestrator | 
2025-09-19 16:51:07.204653 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-19 16:51:07.204664 | orchestrator | Friday 19 September 2025 16:51:02 +0000 (0:00:00.341) 0:00:44.897 ******
2025-09-19 16:51:07.204675 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.204686 | orchestrator | 
2025-09-19 16:51:07.204697 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-19 16:51:07.204708 | orchestrator | Friday 19 September 2025 16:51:02 +0000 (0:00:00.152) 0:00:45.049 ******
2025-09-19 16:51:07.204718 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.204729 | orchestrator | 
2025-09-19 16:51:07.204740 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-19 16:51:07.204750 | orchestrator | Friday 19 September 2025 16:51:03 +0000 (0:00:00.143) 0:00:45.193 ******
2025-09-19 16:51:07.204761 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.204771 | orchestrator | 
2025-09-19 16:51:07.204782 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-19 16:51:07.204793 | orchestrator | Friday 19 September 2025 16:51:03 +0000 (0:00:00.134) 0:00:45.328 ******
2025-09-19 16:51:07.204803 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.204841 | orchestrator | 
2025-09-19 16:51:07.204852 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-19 16:51:07.204863 | orchestrator | Friday 19 September 2025 16:51:03 +0000 (0:00:00.139) 0:00:45.467 ******
2025-09-19 16:51:07.204874 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.204885 | orchestrator | 
2025-09-19 16:51:07.204895 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-19 16:51:07.204906 | orchestrator | Friday 19 September 2025 16:51:03 +0000 (0:00:00.133) 0:00:45.601 ******
2025-09-19 16:51:07.204916 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.204927 | orchestrator | 
2025-09-19 16:51:07.204938 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-19 16:51:07.204949 | orchestrator | Friday 19 September 2025 16:51:03 +0000 (0:00:00.139) 0:00:45.741 ******
2025-09-19 16:51:07.204959 | orchestrator | skipping: [testbed-node-4]
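An aside on the naming scheme visible in the records above: the "Create dict of block VGs -> PVs from ceph_osd_devices" task iterates items like `{'key': 'sdb', 'value': {'osd_lvm_uuid': ...}}`, and the subsequent "Create block VGs"/"Create block LVs" tasks act on `ceph-<uuid>` / `osd-block-<uuid>` names. The playbook's real logic is Jinja2 inside the OSISM Ceph configuration role; the following Python sketch (function names are hypothetical) only reproduces the mapping that the log output implies:

```python
# Values taken from the log above; the functions are illustrative only.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "6bee08d2-4d0c-5efd-9bb6-6357ac0256e2"},
    "sdc": {"osd_lvm_uuid": "c5ef3a10-bb06-5cc2-b298-3a565f19d9a7"},
}

def block_vgs_to_pvs(devices: dict) -> dict:
    """Map each block VG name 'ceph-<uuid>' to its PV '/dev/<device>'."""
    return {
        f"ceph-{v['osd_lvm_uuid']}": f"/dev/{dev}"
        for dev, v in devices.items()
    }

def lvm_volumes_items(devices: dict) -> list:
    """Build the lvm_volumes-style loop items ('data' LV inside 'data_vg')."""
    return [
        {"data": f"osd-block-{v['osd_lvm_uuid']}",
         "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
        for v in devices.values()
    ]

print(block_vgs_to_pvs(ceph_osd_devices))
print(lvm_volumes_items(ceph_osd_devices))
```

With the two UUIDs from this run, the sketch yields exactly the `{'data': ..., 'data_vg': ...}` items that appear in the "Create block VGs" and "Create block LVs" loop output.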
2025-09-19 16:51:07.204970 | orchestrator | 
2025-09-19 16:51:07.204981 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-19 16:51:07.204991 | orchestrator | Friday 19 September 2025 16:51:03 +0000 (0:00:00.130) 0:00:45.872 ******
2025-09-19 16:51:07.205002 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.205012 | orchestrator | 
2025-09-19 16:51:07.205023 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-19 16:51:07.205034 | orchestrator | Friday 19 September 2025 16:51:03 +0000 (0:00:00.141) 0:00:46.014 ******
2025-09-19 16:51:07.205060 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:07.205074 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:07.205085 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.205096 | orchestrator | 
2025-09-19 16:51:07.205107 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-19 16:51:07.205118 | orchestrator | Friday 19 September 2025 16:51:04 +0000 (0:00:00.164) 0:00:46.178 ******
2025-09-19 16:51:07.205129 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:07.205140 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:07.205161 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.205172 | orchestrator | 
2025-09-19 16:51:07.205182 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-19 16:51:07.205193 | orchestrator | Friday 19 September 2025 16:51:04 +0000 (0:00:00.149) 0:00:46.337 ******
2025-09-19 16:51:07.205204 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:07.205215 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:07.205226 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.205236 | orchestrator | 
2025-09-19 16:51:07.205247 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-19 16:51:07.205258 | orchestrator | Friday 19 September 2025 16:51:04 +0000 (0:00:00.149) 0:00:46.486 ******
2025-09-19 16:51:07.205268 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:07.205279 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:07.205290 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.205301 | orchestrator | 
2025-09-19 16:51:07.205311 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-19 16:51:07.205341 | orchestrator | Friday 19 September 2025 16:51:04 +0000 (0:00:00.360) 0:00:46.847 ******
2025-09-19 16:51:07.205353 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:07.205364 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:07.205375 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.205385 | orchestrator | 
2025-09-19 16:51:07.205396 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-19 16:51:07.205407 | orchestrator | Friday 19 September 2025 16:51:04 +0000 (0:00:00.165) 0:00:47.012 ******
2025-09-19 16:51:07.205418 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:07.205428 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:07.205439 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.205451 | orchestrator | 
2025-09-19 16:51:07.205461 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-19 16:51:07.205472 | orchestrator | Friday 19 September 2025 16:51:05 +0000 (0:00:00.134) 0:00:47.147 ******
2025-09-19 16:51:07.205483 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:07.205494 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:07.205504 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.205515 | orchestrator | 
2025-09-19 16:51:07.205526 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-19 16:51:07.205536 | orchestrator | Friday 19 September 2025 16:51:05 +0000 (0:00:00.154) 0:00:47.302 ******
2025-09-19 16:51:07.205547 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:07.205564 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:07.205575 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.205586 | orchestrator | 
2025-09-19 16:51:07.205603 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-19 16:51:07.205614 | orchestrator | Friday 19 September 2025 16:51:05 +0000 (0:00:00.174) 0:00:47.477 ******
2025-09-19 16:51:07.205625 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:51:07.205635 | orchestrator | 
2025-09-19 16:51:07.205646 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-19 16:51:07.205657 | orchestrator | Friday 19 September 2025 16:51:05 +0000 (0:00:00.566) 0:00:48.044 ******
2025-09-19 16:51:07.205667 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:51:07.205678 | orchestrator | 
2025-09-19 16:51:07.205689 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-19 16:51:07.205699 | orchestrator | Friday 19 September 2025 16:51:06 +0000 (0:00:00.563) 0:00:48.607 ******
2025-09-19 16:51:07.205710 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:51:07.205721 | orchestrator | 
2025-09-19 16:51:07.205732 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-19 16:51:07.205742 | orchestrator | Friday 19 September 2025 16:51:06 +0000 (0:00:00.162) 0:00:48.769 ******
2025-09-19 16:51:07.205753 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'vg_name': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:07.205764 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'vg_name': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:07.205775 | orchestrator | 
2025-09-19 16:51:07.205849 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-19 16:51:07.205861 | orchestrator | Friday 19 September 2025 16:51:06 +0000 (0:00:00.192) 0:00:48.962 ******
2025-09-19 16:51:07.205871 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:07.205882 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:07.205893 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:07.205904 | orchestrator | 
2025-09-19 16:51:07.205915 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-19 16:51:07.205926 | orchestrator | Friday 19 September 2025 16:51:07 +0000 (0:00:00.160) 0:00:49.122 ******
2025-09-19 16:51:07.205936 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:07.205947 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:07.205966 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:13.369092 | orchestrator | 
2025-09-19 16:51:13.369211 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-19 16:51:13.369229 | orchestrator | Friday 19 September 2025 16:51:07 +0000 (0:00:00.166) 0:00:49.288 ******
2025-09-19 16:51:13.369242 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 16:51:13.369255 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 16:51:13.369266 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:51:13.369278 | orchestrator | 
2025-09-19 16:51:13.369289 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-19 16:51:13.369300 | orchestrator | Friday 19 September 2025 16:51:07 +0000 (0:00:00.158) 0:00:49.447 ******
2025-09-19 16:51:13.369337 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 16:51:13.369348 | orchestrator |     "lvm_report": {
2025-09-19 16:51:13.369361 | orchestrator |         "lv": [
2025-09-19 16:51:13.369372 | orchestrator |             {
2025-09-19 16:51:13.369383 | orchestrator |                 "lv_name": "osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2",
2025-09-19 16:51:13.369395 | orchestrator |                 "vg_name": "ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2"
2025-09-19 16:51:13.369405 | orchestrator |             },
2025-09-19 16:51:13.369416 | orchestrator |             {
2025-09-19 16:51:13.369427 | orchestrator |                 "lv_name": "osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7",
2025-09-19 16:51:13.369437 | orchestrator |                 "vg_name": "ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7"
2025-09-19 16:51:13.369448 | orchestrator |             }
2025-09-19 16:51:13.369459 | orchestrator |         ],
2025-09-19 16:51:13.369469 | orchestrator |         "pv": [
2025-09-19 16:51:13.369480 | orchestrator |             {
2025-09-19 16:51:13.369490 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-19 16:51:13.369501 | orchestrator |                 "vg_name": "ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2"
2025-09-19 16:51:13.369512 | orchestrator |             },
2025-09-19 16:51:13.369522 | orchestrator |             {
2025-09-19 16:51:13.369533 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-19 16:51:13.369544 | orchestrator |                 "vg_name":
"ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7" 2025-09-19 16:51:13.369554 | orchestrator |  } 2025-09-19 16:51:13.369565 | orchestrator |  ] 2025-09-19 16:51:13.369575 | orchestrator |  } 2025-09-19 16:51:13.369586 | orchestrator | } 2025-09-19 16:51:13.369597 | orchestrator | 2025-09-19 16:51:13.369608 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-19 16:51:13.369619 | orchestrator | 2025-09-19 16:51:13.369631 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-19 16:51:13.369644 | orchestrator | Friday 19 September 2025 16:51:07 +0000 (0:00:00.580) 0:00:50.027 ****** 2025-09-19 16:51:13.369655 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-19 16:51:13.369667 | orchestrator | 2025-09-19 16:51:13.369680 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-19 16:51:13.369692 | orchestrator | Friday 19 September 2025 16:51:08 +0000 (0:00:00.235) 0:00:50.262 ****** 2025-09-19 16:51:13.369704 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:51:13.369716 | orchestrator | 2025-09-19 16:51:13.369728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.369740 | orchestrator | Friday 19 September 2025 16:51:08 +0000 (0:00:00.229) 0:00:50.492 ****** 2025-09-19 16:51:13.369752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-19 16:51:13.369764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-19 16:51:13.369777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-19 16:51:13.369789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-19 16:51:13.369801 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-19 16:51:13.369840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-19 16:51:13.369853 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-19 16:51:13.369865 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-19 16:51:13.369877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-19 16:51:13.369888 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-19 16:51:13.369901 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-19 16:51:13.369921 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-19 16:51:13.369933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-19 16:51:13.369945 | orchestrator | 2025-09-19 16:51:13.369957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.369970 | orchestrator | Friday 19 September 2025 16:51:08 +0000 (0:00:00.433) 0:00:50.925 ****** 2025-09-19 16:51:13.369981 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:13.369996 | orchestrator | 2025-09-19 16:51:13.370007 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.370067 | orchestrator | Friday 19 September 2025 16:51:09 +0000 (0:00:00.200) 0:00:51.125 ****** 2025-09-19 16:51:13.370079 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:13.370090 | orchestrator | 2025-09-19 16:51:13.370101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.370130 | orchestrator | 
Friday 19 September 2025 16:51:09 +0000 (0:00:00.205) 0:00:51.331 ****** 2025-09-19 16:51:13.370141 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:13.370152 | orchestrator | 2025-09-19 16:51:13.370163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.370174 | orchestrator | Friday 19 September 2025 16:51:09 +0000 (0:00:00.195) 0:00:51.527 ****** 2025-09-19 16:51:13.370185 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:13.370195 | orchestrator | 2025-09-19 16:51:13.370206 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.370217 | orchestrator | Friday 19 September 2025 16:51:09 +0000 (0:00:00.197) 0:00:51.724 ****** 2025-09-19 16:51:13.370228 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:13.370239 | orchestrator | 2025-09-19 16:51:13.370250 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.370261 | orchestrator | Friday 19 September 2025 16:51:09 +0000 (0:00:00.205) 0:00:51.930 ****** 2025-09-19 16:51:13.370271 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:13.370282 | orchestrator | 2025-09-19 16:51:13.370293 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.370304 | orchestrator | Friday 19 September 2025 16:51:10 +0000 (0:00:00.551) 0:00:52.481 ****** 2025-09-19 16:51:13.370315 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:13.370325 | orchestrator | 2025-09-19 16:51:13.370336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.370347 | orchestrator | Friday 19 September 2025 16:51:10 +0000 (0:00:00.201) 0:00:52.683 ****** 2025-09-19 16:51:13.370358 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:13.370369 | orchestrator | 2025-09-19 16:51:13.370380 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.370391 | orchestrator | Friday 19 September 2025 16:51:10 +0000 (0:00:00.225) 0:00:52.908 ****** 2025-09-19 16:51:13.370401 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf) 2025-09-19 16:51:13.370463 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf) 2025-09-19 16:51:13.370475 | orchestrator | 2025-09-19 16:51:13.370486 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.370497 | orchestrator | Friday 19 September 2025 16:51:11 +0000 (0:00:00.446) 0:00:53.354 ****** 2025-09-19 16:51:13.370507 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_5e704911-d475-45db-a46e-b2c1a2edd26e) 2025-09-19 16:51:13.370518 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_5e704911-d475-45db-a46e-b2c1a2edd26e) 2025-09-19 16:51:13.370529 | orchestrator | 2025-09-19 16:51:13.370540 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.370550 | orchestrator | Friday 19 September 2025 16:51:11 +0000 (0:00:00.429) 0:00:53.783 ****** 2025-09-19 16:51:13.370574 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_ea7e2490-24d2-49b7-b6d3-38bb6098dff1) 2025-09-19 16:51:13.370586 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_ea7e2490-24d2-49b7-b6d3-38bb6098dff1) 2025-09-19 16:51:13.370596 | orchestrator | 2025-09-19 16:51:13.370607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.370618 | orchestrator | Friday 19 September 2025 16:51:12 +0000 (0:00:00.464) 0:00:54.248 ****** 2025-09-19 16:51:13.370628 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_bc231350-c60d-45ad-9b08-eb0e8cdec0b5) 2025-09-19 16:51:13.370639 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_bc231350-c60d-45ad-9b08-eb0e8cdec0b5) 2025-09-19 16:51:13.370650 | orchestrator | 2025-09-19 16:51:13.370661 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-19 16:51:13.370671 | orchestrator | Friday 19 September 2025 16:51:12 +0000 (0:00:00.435) 0:00:54.683 ****** 2025-09-19 16:51:13.370682 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-19 16:51:13.370692 | orchestrator | 2025-09-19 16:51:13.370703 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:13.370713 | orchestrator | Friday 19 September 2025 16:51:12 +0000 (0:00:00.352) 0:00:55.036 ****** 2025-09-19 16:51:13.370724 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-19 16:51:13.370734 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-19 16:51:13.370745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-19 16:51:13.370756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-19 16:51:13.370766 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-19 16:51:13.370777 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-19 16:51:13.370787 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-19 16:51:13.370798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-19 16:51:13.370844 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-19 16:51:13.370868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-19 16:51:13.370886 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-19 16:51:13.370913 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-19 16:51:22.468209 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-19 16:51:22.468350 | orchestrator | 2025-09-19 16:51:22.468378 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:22.468397 | orchestrator | Friday 19 September 2025 16:51:13 +0000 (0:00:00.413) 0:00:55.450 ****** 2025-09-19 16:51:22.468416 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.468436 | orchestrator | 2025-09-19 16:51:22.468456 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:22.468475 | orchestrator | Friday 19 September 2025 16:51:13 +0000 (0:00:00.207) 0:00:55.657 ****** 2025-09-19 16:51:22.468493 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.468509 | orchestrator | 2025-09-19 16:51:22.468520 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:22.468531 | orchestrator | Friday 19 September 2025 16:51:13 +0000 (0:00:00.197) 0:00:55.854 ****** 2025-09-19 16:51:22.468542 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.468553 | orchestrator | 2025-09-19 16:51:22.468564 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:22.468601 | orchestrator | Friday 19 September 2025 16:51:14 +0000 (0:00:00.575) 0:00:56.430 ****** 2025-09-19 16:51:22.468612 | orchestrator | 
skipping: [testbed-node-5] 2025-09-19 16:51:22.468623 | orchestrator | 2025-09-19 16:51:22.468634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:22.468644 | orchestrator | Friday 19 September 2025 16:51:14 +0000 (0:00:00.224) 0:00:56.655 ****** 2025-09-19 16:51:22.468655 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.468665 | orchestrator | 2025-09-19 16:51:22.468676 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:22.468687 | orchestrator | Friday 19 September 2025 16:51:14 +0000 (0:00:00.199) 0:00:56.854 ****** 2025-09-19 16:51:22.468697 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.468708 | orchestrator | 2025-09-19 16:51:22.468719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:22.468731 | orchestrator | Friday 19 September 2025 16:51:14 +0000 (0:00:00.206) 0:00:57.060 ****** 2025-09-19 16:51:22.468743 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.468755 | orchestrator | 2025-09-19 16:51:22.468767 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:22.468779 | orchestrator | Friday 19 September 2025 16:51:15 +0000 (0:00:00.188) 0:00:57.249 ****** 2025-09-19 16:51:22.468791 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.468803 | orchestrator | 2025-09-19 16:51:22.468844 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:22.468857 | orchestrator | Friday 19 September 2025 16:51:15 +0000 (0:00:00.194) 0:00:57.443 ****** 2025-09-19 16:51:22.468869 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-19 16:51:22.468882 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-09-19 16:51:22.468911 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-19 
16:51:22.468924 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-19 16:51:22.468936 | orchestrator | 2025-09-19 16:51:22.468948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:22.468960 | orchestrator | Friday 19 September 2025 16:51:16 +0000 (0:00:00.658) 0:00:58.101 ****** 2025-09-19 16:51:22.468972 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.468985 | orchestrator | 2025-09-19 16:51:22.468997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:22.469009 | orchestrator | Friday 19 September 2025 16:51:16 +0000 (0:00:00.213) 0:00:58.315 ****** 2025-09-19 16:51:22.469021 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.469033 | orchestrator | 2025-09-19 16:51:22.469046 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:22.469059 | orchestrator | Friday 19 September 2025 16:51:16 +0000 (0:00:00.205) 0:00:58.520 ****** 2025-09-19 16:51:22.469071 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.469083 | orchestrator | 2025-09-19 16:51:22.469094 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-19 16:51:22.469104 | orchestrator | Friday 19 September 2025 16:51:16 +0000 (0:00:00.203) 0:00:58.724 ****** 2025-09-19 16:51:22.469115 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.469126 | orchestrator | 2025-09-19 16:51:22.469141 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-19 16:51:22.469160 | orchestrator | Friday 19 September 2025 16:51:16 +0000 (0:00:00.192) 0:00:58.916 ****** 2025-09-19 16:51:22.469177 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.469202 | orchestrator | 2025-09-19 16:51:22.469227 | orchestrator | TASK [Create dict of block VGs -> PVs from 
ceph_osd_devices] ******************* 2025-09-19 16:51:22.469246 | orchestrator | Friday 19 September 2025 16:51:17 +0000 (0:00:00.411) 0:00:59.328 ****** 2025-09-19 16:51:22.469265 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '4de995f9-e371-53ec-a5e6-95298d442fa2'}}) 2025-09-19 16:51:22.469283 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'ea687e85-c7c1-53f3-8dfd-7d637eed1a38'}}) 2025-09-19 16:51:22.469313 | orchestrator | 2025-09-19 16:51:22.469324 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-19 16:51:22.469335 | orchestrator | Friday 19 September 2025 16:51:17 +0000 (0:00:00.217) 0:00:59.545 ****** 2025-09-19 16:51:22.469347 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'}) 2025-09-19 16:51:22.469359 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'}) 2025-09-19 16:51:22.469370 | orchestrator | 2025-09-19 16:51:22.469380 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-19 16:51:22.469410 | orchestrator | Friday 19 September 2025 16:51:19 +0000 (0:00:01.906) 0:01:01.451 ****** 2025-09-19 16:51:22.469421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:22.469433 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:22.469444 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.469455 | orchestrator | 2025-09-19 16:51:22.469468 | orchestrator | TASK [Create 
block LVs] ******************************************************** 2025-09-19 16:51:22.469487 | orchestrator | Friday 19 September 2025 16:51:19 +0000 (0:00:00.155) 0:01:01.607 ****** 2025-09-19 16:51:22.469505 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'}) 2025-09-19 16:51:22.469523 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'}) 2025-09-19 16:51:22.469552 | orchestrator | 2025-09-19 16:51:22.469571 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-19 16:51:22.469590 | orchestrator | Friday 19 September 2025 16:51:20 +0000 (0:00:01.345) 0:01:02.952 ****** 2025-09-19 16:51:22.469608 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:22.469628 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:22.469646 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.469665 | orchestrator | 2025-09-19 16:51:22.469683 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-19 16:51:22.469706 | orchestrator | Friday 19 September 2025 16:51:21 +0000 (0:00:00.188) 0:01:03.141 ****** 2025-09-19 16:51:22.469737 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.469762 | orchestrator | 2025-09-19 16:51:22.469780 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-19 16:51:22.469800 | orchestrator | Friday 19 September 2025 16:51:21 +0000 (0:00:00.136) 0:01:03.277 ****** 2025-09-19 16:51:22.469845 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:22.469873 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:22.469885 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.469896 | orchestrator | 2025-09-19 16:51:22.469907 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-19 16:51:22.469917 | orchestrator | Friday 19 September 2025 16:51:21 +0000 (0:00:00.177) 0:01:03.455 ****** 2025-09-19 16:51:22.469928 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.469949 | orchestrator | 2025-09-19 16:51:22.469960 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-19 16:51:22.469971 | orchestrator | Friday 19 September 2025 16:51:21 +0000 (0:00:00.140) 0:01:03.595 ****** 2025-09-19 16:51:22.469981 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:22.469992 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:22.470003 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.470014 | orchestrator | 2025-09-19 16:51:22.470078 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-19 16:51:22.470090 | orchestrator | Friday 19 September 2025 16:51:21 +0000 (0:00:00.153) 0:01:03.749 ****** 2025-09-19 16:51:22.470100 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.470111 | orchestrator | 2025-09-19 16:51:22.470122 | orchestrator | TASK 
[Print 'Create DB+WAL VGs'] *********************************************** 2025-09-19 16:51:22.470133 | orchestrator | Friday 19 September 2025 16:51:21 +0000 (0:00:00.145) 0:01:03.894 ****** 2025-09-19 16:51:22.470144 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:22.470155 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:22.470166 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:22.470176 | orchestrator | 2025-09-19 16:51:22.470187 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-19 16:51:22.470198 | orchestrator | Friday 19 September 2025 16:51:21 +0000 (0:00:00.156) 0:01:04.051 ****** 2025-09-19 16:51:22.470209 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:51:22.470220 | orchestrator | 2025-09-19 16:51:22.470231 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-09-19 16:51:22.470241 | orchestrator | Friday 19 September 2025 16:51:22 +0000 (0:00:00.344) 0:01:04.395 ****** 2025-09-19 16:51:22.470264 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:28.646936 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:28.647073 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.647098 | orchestrator | 2025-09-19 16:51:28.647114 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-19 16:51:28.647134 | orchestrator | Friday 19 September 2025 
16:51:22 +0000 (0:00:00.162) 0:01:04.558 ****** 2025-09-19 16:51:28.647155 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:28.647175 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:28.647187 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.647200 | orchestrator | 2025-09-19 16:51:28.647220 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-19 16:51:28.647239 | orchestrator | Friday 19 September 2025 16:51:22 +0000 (0:00:00.154) 0:01:04.712 ****** 2025-09-19 16:51:28.647258 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:28.647271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:28.647282 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.647329 | orchestrator | 2025-09-19 16:51:28.647348 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-19 16:51:28.647359 | orchestrator | Friday 19 September 2025 16:51:22 +0000 (0:00:00.160) 0:01:04.873 ****** 2025-09-19 16:51:28.647370 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.647381 | orchestrator | 2025-09-19 16:51:28.647394 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-19 16:51:28.647406 | orchestrator | Friday 19 September 2025 16:51:22 +0000 (0:00:00.147) 0:01:05.020 ****** 2025-09-19 16:51:28.647419 | orchestrator | skipping: [testbed-node-5] 2025-09-19 
16:51:28.647430 | orchestrator | 2025-09-19 16:51:28.647443 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-19 16:51:28.647455 | orchestrator | Friday 19 September 2025 16:51:23 +0000 (0:00:00.141) 0:01:05.162 ****** 2025-09-19 16:51:28.647467 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.647478 | orchestrator | 2025-09-19 16:51:28.647490 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-19 16:51:28.647503 | orchestrator | Friday 19 September 2025 16:51:23 +0000 (0:00:00.161) 0:01:05.323 ****** 2025-09-19 16:51:28.647515 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 16:51:28.647527 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-19 16:51:28.647539 | orchestrator | } 2025-09-19 16:51:28.647552 | orchestrator | 2025-09-19 16:51:28.647564 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-19 16:51:28.647576 | orchestrator | Friday 19 September 2025 16:51:23 +0000 (0:00:00.153) 0:01:05.476 ****** 2025-09-19 16:51:28.647588 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 16:51:28.647600 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-19 16:51:28.647612 | orchestrator | } 2025-09-19 16:51:28.647624 | orchestrator | 2025-09-19 16:51:28.647636 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-19 16:51:28.647649 | orchestrator | Friday 19 September 2025 16:51:23 +0000 (0:00:00.201) 0:01:05.678 ****** 2025-09-19 16:51:28.647661 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 16:51:28.647673 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-19 16:51:28.647686 | orchestrator | } 2025-09-19 16:51:28.647698 | orchestrator | 2025-09-19 16:51:28.647710 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-19 16:51:28.647722 | 
orchestrator | Friday 19 September 2025 16:51:23 +0000 (0:00:00.134) 0:01:05.813 ****** 2025-09-19 16:51:28.647734 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:51:28.647748 | orchestrator | 2025-09-19 16:51:28.647760 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-19 16:51:28.647771 | orchestrator | Friday 19 September 2025 16:51:24 +0000 (0:00:00.527) 0:01:06.340 ****** 2025-09-19 16:51:28.647781 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:51:28.647792 | orchestrator | 2025-09-19 16:51:28.647802 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-19 16:51:28.647840 | orchestrator | Friday 19 September 2025 16:51:24 +0000 (0:00:00.511) 0:01:06.852 ****** 2025-09-19 16:51:28.647853 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:51:28.647864 | orchestrator | 2025-09-19 16:51:28.647875 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-19 16:51:28.647885 | orchestrator | Friday 19 September 2025 16:51:25 +0000 (0:00:00.711) 0:01:07.563 ****** 2025-09-19 16:51:28.647896 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:51:28.647906 | orchestrator | 2025-09-19 16:51:28.647917 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-19 16:51:28.647928 | orchestrator | Friday 19 September 2025 16:51:25 +0000 (0:00:00.142) 0:01:07.705 ****** 2025-09-19 16:51:28.647938 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.647949 | orchestrator | 2025-09-19 16:51:28.647959 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-19 16:51:28.647970 | orchestrator | Friday 19 September 2025 16:51:25 +0000 (0:00:00.127) 0:01:07.833 ****** 2025-09-19 16:51:28.647989 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648000 | orchestrator | 2025-09-19 16:51:28.648011 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-19 16:51:28.648022 | orchestrator | Friday 19 September 2025 16:51:25 +0000 (0:00:00.102) 0:01:07.935 ****** 2025-09-19 16:51:28.648032 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 16:51:28.648043 | orchestrator |  "vgs_report": { 2025-09-19 16:51:28.648054 | orchestrator |  "vg": [] 2025-09-19 16:51:28.648084 | orchestrator |  } 2025-09-19 16:51:28.648096 | orchestrator | } 2025-09-19 16:51:28.648107 | orchestrator | 2025-09-19 16:51:28.648118 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-19 16:51:28.648129 | orchestrator | Friday 19 September 2025 16:51:25 +0000 (0:00:00.152) 0:01:08.088 ****** 2025-09-19 16:51:28.648139 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648150 | orchestrator | 2025-09-19 16:51:28.648161 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-19 16:51:28.648171 | orchestrator | Friday 19 September 2025 16:51:26 +0000 (0:00:00.150) 0:01:08.238 ****** 2025-09-19 16:51:28.648182 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648193 | orchestrator | 2025-09-19 16:51:28.648203 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-19 16:51:28.648214 | orchestrator | Friday 19 September 2025 16:51:26 +0000 (0:00:00.145) 0:01:08.384 ****** 2025-09-19 16:51:28.648224 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648235 | orchestrator | 2025-09-19 16:51:28.648246 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-19 16:51:28.648256 | orchestrator | Friday 19 September 2025 16:51:26 +0000 (0:00:00.142) 0:01:08.526 ****** 2025-09-19 16:51:28.648267 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648277 | orchestrator | 2025-09-19 16:51:28.648288 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-19 16:51:28.648317 | orchestrator | Friday 19 September 2025 16:51:26 +0000 (0:00:00.140) 0:01:08.667 ****** 2025-09-19 16:51:28.648329 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648339 | orchestrator | 2025-09-19 16:51:28.648350 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-19 16:51:28.648361 | orchestrator | Friday 19 September 2025 16:51:26 +0000 (0:00:00.142) 0:01:08.809 ****** 2025-09-19 16:51:28.648371 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648382 | orchestrator | 2025-09-19 16:51:28.648393 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-19 16:51:28.648403 | orchestrator | Friday 19 September 2025 16:51:26 +0000 (0:00:00.130) 0:01:08.939 ****** 2025-09-19 16:51:28.648414 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648425 | orchestrator | 2025-09-19 16:51:28.648435 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-09-19 16:51:28.648446 | orchestrator | Friday 19 September 2025 16:51:26 +0000 (0:00:00.139) 0:01:09.079 ****** 2025-09-19 16:51:28.648457 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648467 | orchestrator | 2025-09-19 16:51:28.648478 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-19 16:51:28.648489 | orchestrator | Friday 19 September 2025 16:51:27 +0000 (0:00:00.137) 0:01:09.216 ****** 2025-09-19 16:51:28.648499 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648510 | orchestrator | 2025-09-19 16:51:28.648521 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-19 16:51:28.648536 | orchestrator | Friday 19 September 2025 16:51:27 +0000 (0:00:00.331) 0:01:09.547 ****** 
2025-09-19 16:51:28.648547 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648558 | orchestrator | 2025-09-19 16:51:28.648568 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-19 16:51:28.648579 | orchestrator | Friday 19 September 2025 16:51:27 +0000 (0:00:00.144) 0:01:09.692 ****** 2025-09-19 16:51:28.648590 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648608 | orchestrator | 2025-09-19 16:51:28.648618 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-19 16:51:28.648629 | orchestrator | Friday 19 September 2025 16:51:27 +0000 (0:00:00.161) 0:01:09.854 ****** 2025-09-19 16:51:28.648640 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648650 | orchestrator | 2025-09-19 16:51:28.648661 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-19 16:51:28.648672 | orchestrator | Friday 19 September 2025 16:51:27 +0000 (0:00:00.140) 0:01:09.994 ****** 2025-09-19 16:51:28.648682 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648693 | orchestrator | 2025-09-19 16:51:28.648704 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-19 16:51:28.648714 | orchestrator | Friday 19 September 2025 16:51:28 +0000 (0:00:00.141) 0:01:10.135 ****** 2025-09-19 16:51:28.648725 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648736 | orchestrator | 2025-09-19 16:51:28.648746 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-19 16:51:28.648757 | orchestrator | Friday 19 September 2025 16:51:28 +0000 (0:00:00.147) 0:01:10.283 ****** 2025-09-19 16:51:28.648768 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 
16:51:28.648779 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:28.648789 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648800 | orchestrator | 2025-09-19 16:51:28.648832 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-19 16:51:28.648845 | orchestrator | Friday 19 September 2025 16:51:28 +0000 (0:00:00.156) 0:01:10.440 ****** 2025-09-19 16:51:28.648856 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:28.648867 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:28.648877 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:28.648888 | orchestrator | 2025-09-19 16:51:28.648899 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-19 16:51:28.648909 | orchestrator | Friday 19 September 2025 16:51:28 +0000 (0:00:00.151) 0:01:10.591 ****** 2025-09-19 16:51:28.648927 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:31.607638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:31.607757 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:31.607787 | orchestrator | 2025-09-19 16:51:31.607806 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-19 16:51:31.607896 | orchestrator | Friday 19 September 2025 
16:51:28 +0000 (0:00:00.143) 0:01:10.735 ****** 2025-09-19 16:51:31.607915 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:31.607932 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:31.607949 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:31.607966 | orchestrator | 2025-09-19 16:51:31.607984 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-19 16:51:31.608001 | orchestrator | Friday 19 September 2025 16:51:28 +0000 (0:00:00.152) 0:01:10.888 ****** 2025-09-19 16:51:31.608020 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:31.608073 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:31.608093 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:31.608110 | orchestrator | 2025-09-19 16:51:31.608127 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-19 16:51:31.608145 | orchestrator | Friday 19 September 2025 16:51:28 +0000 (0:00:00.159) 0:01:11.047 ****** 2025-09-19 16:51:31.608162 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:31.608180 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:31.608197 | orchestrator | skipping: 
[testbed-node-5] 2025-09-19 16:51:31.608215 | orchestrator | 2025-09-19 16:51:31.608254 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-19 16:51:31.608273 | orchestrator | Friday 19 September 2025 16:51:29 +0000 (0:00:00.154) 0:01:11.202 ****** 2025-09-19 16:51:31.608291 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:31.608310 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:31.608329 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:31.608347 | orchestrator | 2025-09-19 16:51:31.608366 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-19 16:51:31.608384 | orchestrator | Friday 19 September 2025 16:51:29 +0000 (0:00:00.347) 0:01:11.549 ****** 2025-09-19 16:51:31.608397 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:31.608410 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:31.608421 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:31.608433 | orchestrator | 2025-09-19 16:51:31.608445 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-19 16:51:31.608458 | orchestrator | Friday 19 September 2025 16:51:29 +0000 (0:00:00.156) 0:01:11.706 ****** 2025-09-19 16:51:31.608470 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:51:31.608482 | orchestrator | 2025-09-19 16:51:31.608494 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-09-19 16:51:31.608506 | orchestrator | Friday 19 September 2025 16:51:30 +0000 (0:00:00.490) 0:01:12.197 ****** 2025-09-19 16:51:31.608518 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:51:31.608530 | orchestrator | 2025-09-19 16:51:31.608540 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-19 16:51:31.608551 | orchestrator | Friday 19 September 2025 16:51:30 +0000 (0:00:00.483) 0:01:12.681 ****** 2025-09-19 16:51:31.608561 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:51:31.608572 | orchestrator | 2025-09-19 16:51:31.608582 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-19 16:51:31.608593 | orchestrator | Friday 19 September 2025 16:51:30 +0000 (0:00:00.189) 0:01:12.870 ****** 2025-09-19 16:51:31.608603 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'vg_name': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'}) 2025-09-19 16:51:31.608615 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'vg_name': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'}) 2025-09-19 16:51:31.608626 | orchestrator | 2025-09-19 16:51:31.608636 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-19 16:51:31.608662 | orchestrator | Friday 19 September 2025 16:51:30 +0000 (0:00:00.196) 0:01:13.067 ****** 2025-09-19 16:51:31.608694 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:31.608705 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:31.608716 | orchestrator | skipping: 
[testbed-node-5] 2025-09-19 16:51:31.608727 | orchestrator | 2025-09-19 16:51:31.608737 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-19 16:51:31.608748 | orchestrator | Friday 19 September 2025 16:51:31 +0000 (0:00:00.151) 0:01:13.218 ****** 2025-09-19 16:51:31.608758 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:31.608769 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:31.608781 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:31.608791 | orchestrator | 2025-09-19 16:51:31.608802 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-19 16:51:31.608839 | orchestrator | Friday 19 September 2025 16:51:31 +0000 (0:00:00.153) 0:01:13.372 ****** 2025-09-19 16:51:31.608851 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})  2025-09-19 16:51:31.608862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})  2025-09-19 16:51:31.608872 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:31.608883 | orchestrator | 2025-09-19 16:51:31.608893 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-19 16:51:31.608904 | orchestrator | Friday 19 September 2025 16:51:31 +0000 (0:00:00.157) 0:01:13.529 ****** 2025-09-19 16:51:31.608914 | orchestrator | ok: [testbed-node-5] => { 2025-09-19 16:51:31.608925 | orchestrator |  "lvm_report": { 2025-09-19 16:51:31.608936 | orchestrator |  "lv": [ 2025-09-19 
16:51:31.608946 | orchestrator |  {
2025-09-19 16:51:31.608957 | orchestrator |  "lv_name": "osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2",
2025-09-19 16:51:31.608976 | orchestrator |  "vg_name": "ceph-4de995f9-e371-53ec-a5e6-95298d442fa2"
2025-09-19 16:51:31.608987 | orchestrator |  },
2025-09-19 16:51:31.608997 | orchestrator |  {
2025-09-19 16:51:31.609008 | orchestrator |  "lv_name": "osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38",
2025-09-19 16:51:31.609019 | orchestrator |  "vg_name": "ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38"
2025-09-19 16:51:31.609029 | orchestrator |  }
2025-09-19 16:51:31.609040 | orchestrator |  ],
2025-09-19 16:51:31.609050 | orchestrator |  "pv": [
2025-09-19 16:51:31.609061 | orchestrator |  {
2025-09-19 16:51:31.609072 | orchestrator |  "pv_name": "/dev/sdb",
2025-09-19 16:51:31.609082 | orchestrator |  "vg_name": "ceph-4de995f9-e371-53ec-a5e6-95298d442fa2"
2025-09-19 16:51:31.609093 | orchestrator |  },
2025-09-19 16:51:31.609103 | orchestrator |  {
2025-09-19 16:51:31.609114 | orchestrator |  "pv_name": "/dev/sdc",
2025-09-19 16:51:31.609124 | orchestrator |  "vg_name": "ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38"
2025-09-19 16:51:31.609135 | orchestrator |  }
2025-09-19 16:51:31.609145 | orchestrator |  ]
2025-09-19 16:51:31.609156 | orchestrator |  }
2025-09-19 16:51:31.609166 | orchestrator | }
2025-09-19 16:51:31.609177 | orchestrator |
2025-09-19 16:51:31.609188 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:51:31.609207 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-19 16:51:31.609217 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-19 16:51:31.609228 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-09-19 16:51:31.609239 | orchestrator |
2025-09-19 16:51:31.609249 |
orchestrator |
2025-09-19 16:51:31.609260 | orchestrator |
2025-09-19 16:51:31.609270 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:51:31.609281 | orchestrator | Friday 19 September 2025 16:51:31 +0000 (0:00:00.144) 0:01:13.674 ******
2025-09-19 16:51:31.609291 | orchestrator | ===============================================================================
2025-09-19 16:51:31.609302 | orchestrator | Create block VGs -------------------------------------------------------- 5.78s
2025-09-19 16:51:31.609312 | orchestrator | Create block LVs -------------------------------------------------------- 4.28s
2025-09-19 16:51:31.609323 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.87s
2025-09-19 16:51:31.609333 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.76s
2025-09-19 16:51:31.609344 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.64s
2025-09-19 16:51:31.609354 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.59s
2025-09-19 16:51:31.609364 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.59s
2025-09-19 16:51:31.609375 | orchestrator | Add known partitions to the list of available block devices ------------- 1.43s
2025-09-19 16:51:31.609392 | orchestrator | Add known links to the list of available block devices ------------------ 1.24s
2025-09-19 16:51:31.955217 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s
2025-09-19 16:51:31.955324 | orchestrator | Print LVM report data --------------------------------------------------- 1.01s
2025-09-19 16:51:31.955338 | orchestrator | Add known links to the list of available block devices ------------------ 0.86s
2025-09-19 16:51:31.955349 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.80s
2025-09-19 16:51:31.955359 | orchestrator | Add known partitions to the list of available block devices ------------- 0.76s
2025-09-19 16:51:31.955370 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s
2025-09-19 16:51:31.955380 | orchestrator | Print size needed for LVs on ceph_wal_devices --------------------------- 0.73s
2025-09-19 16:51:31.955391 | orchestrator | Create DB LVs for ceph_db_devices --------------------------------------- 0.71s
2025-09-19 16:51:31.955401 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.70s
2025-09-19 16:51:31.955412 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.69s
2025-09-19 16:51:31.955422 | orchestrator | Get initial list of available block devices ----------------------------- 0.68s
2025-09-19 16:51:44.105779 | orchestrator | 2025-09-19 16:51:44 | INFO  | Task 40df1926-88c1-4f92-8f67-49ea936edbc7 (facts) was prepared for execution.
2025-09-19 16:51:44.105883 | orchestrator | 2025-09-19 16:51:44 | INFO  | It takes a moment until task 40df1926-88c1-4f92-8f67-49ea936edbc7 (facts) has been started and output is visible here.
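The "Gather DB/WAL VGs with total and available size in bytes" tasks and the "Fail if size ... > available" checks in the run above boil down to reading LVM's JSON report and comparing byte counts. A minimal Python sketch of that logic, assuming output in the shape emitted by `vgs --units b --reportformat json`; the VG name and sizes below are illustrative, not values from this job:

```python
import json

# Sample report in the shape of `vgs --units b --reportformat json`
# (illustrative values: a 100 GiB VG with 60 GiB free).
VGS_JSON = """
{"report": [{"vg": [
  {"vg_name": "ceph-db-0", "vg_size": "107374182400B", "vg_free": "64424509440B"}
]}]}
"""

def vg_sizes(report_text):
    """Map vg_name -> (total_bytes, free_bytes) from a vgs JSON report."""
    report = json.loads(report_text)
    sizes = {}
    for group in report["report"]:
        for vg in group.get("vg", []):
            total = int(vg["vg_size"].rstrip("B"))
            free = int(vg["vg_free"].rstrip("B"))
            sizes[vg["vg_name"]] = (total, free)
    return sizes

def fits(report_text, vg_name, wanted_bytes):
    """Mirror the 'Fail if size ... > available' check: does the LV fit?"""
    total, free = vg_sizes(report_text)[vg_name]
    return wanted_bytes <= free

print(fits(VGS_JSON, "ceph-db-0", 30 * 1024**3))  # 30 GiB request -> True
```

The same comparison also covers the "Fail if DB LV size < 30 GiB" guard seen above: that check is simply a lower bound on `wanted_bytes` rather than an upper bound against `vg_free`.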
2025-09-19 16:51:55.683522 | orchestrator | 2025-09-19 16:51:55.683649 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-09-19 16:51:55.683685 | orchestrator | 2025-09-19 16:51:55.683707 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-09-19 16:51:55.683747 | orchestrator | Friday 19 September 2025 16:51:47 +0000 (0:00:00.246) 0:00:00.246 ****** 2025-09-19 16:51:55.683765 | orchestrator | ok: [testbed-manager] 2025-09-19 16:51:55.683796 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:51:55.683915 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:51:55.683939 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:51:55.683958 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:51:55.683977 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:51:55.683996 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:51:55.684015 | orchestrator | 2025-09-19 16:51:55.684037 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-09-19 16:51:55.684057 | orchestrator | Friday 19 September 2025 16:51:48 +0000 (0:00:00.954) 0:00:01.200 ****** 2025-09-19 16:51:55.684090 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:51:55.684104 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:51:55.684118 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:51:55.684130 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:51:55.684142 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:51:55.684154 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:51:55.684166 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:55.684178 | orchestrator | 2025-09-19 16:51:55.684191 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-09-19 16:51:55.684203 | orchestrator | 2025-09-19 16:51:55.684215 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-09-19 16:51:55.684228 | orchestrator | Friday 19 September 2025 16:51:49 +0000 (0:00:01.125) 0:00:02.326 ****** 2025-09-19 16:51:55.684240 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:51:55.684253 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:51:55.684265 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:51:55.684277 | orchestrator | ok: [testbed-manager] 2025-09-19 16:51:55.684289 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:51:55.684302 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:51:55.684314 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:51:55.684326 | orchestrator | 2025-09-19 16:51:55.684339 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-09-19 16:51:55.684351 | orchestrator | 2025-09-19 16:51:55.684364 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-09-19 16:51:55.684377 | orchestrator | Friday 19 September 2025 16:51:54 +0000 (0:00:05.115) 0:00:07.441 ****** 2025-09-19 16:51:55.684389 | orchestrator | skipping: [testbed-manager] 2025-09-19 16:51:55.684402 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:51:55.684414 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:51:55.684426 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:51:55.684436 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:51:55.684447 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:51:55.684457 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:51:55.684468 | orchestrator | 2025-09-19 16:51:55.684478 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 16:51:55.684490 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 16:51:55.684502 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2025-09-19 16:51:55.684513 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 16:51:55.684524 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 16:51:55.684535 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 16:51:55.684546 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 16:51:55.684556 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 16:51:55.684579 | orchestrator | 2025-09-19 16:51:55.684590 | orchestrator | 2025-09-19 16:51:55.684601 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 16:51:55.684611 | orchestrator | Friday 19 September 2025 16:51:55 +0000 (0:00:00.466) 0:00:07.908 ****** 2025-09-19 16:51:55.684622 | orchestrator | =============================================================================== 2025-09-19 16:51:55.684633 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.12s 2025-09-19 16:51:55.684643 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.13s 2025-09-19 16:51:55.684654 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.95s 2025-09-19 16:51:55.684665 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s 2025-09-19 16:52:07.743751 | orchestrator | 2025-09-19 16:52:07 | INFO  | Task 8e5bc83b-151a-4cc9-9051-4e43ddc8f4f8 (frr) was prepared for execution. 2025-09-19 16:52:07.743872 | orchestrator | 2025-09-19 16:52:07 | INFO  | It takes a moment until task 8e5bc83b-151a-4cc9-9051-4e43ddc8f4f8 (frr) has been started and output is visible here. 
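The frr run that follows applies a fixed set of kernel parameters in its "Set sysctl parameters" task. The name/value items from that output can be rendered as sysctl.conf-style lines; the helper below is only an illustrative Python sketch (not part of the osism role, which uses Ansible's sysctl handling):

```python
# name/value pairs copied from the 'Set sysctl parameters' task output below;
# to_sysctl_conf is a hypothetical helper for illustration only.
FRR_SYSCTLS = [
    {"name": "net.ipv4.ip_forward", "value": 1},
    {"name": "net.ipv4.conf.all.send_redirects", "value": 0},
    {"name": "net.ipv4.conf.all.accept_redirects", "value": 0},
    {"name": "net.ipv4.fib_multipath_hash_policy", "value": 1},
    {"name": "net.ipv4.conf.default.ignore_routes_with_linkdown", "value": 1},
    {"name": "net.ipv4.conf.all.rp_filter", "value": 2},
]

def to_sysctl_conf(items):
    """Render name/value items as sysctl.conf-style lines."""
    return "\n".join(f"{i['name']} = {i['value']}" for i in items)

print(to_sysctl_conf(FRR_SYSCTLS).splitlines()[0])  # net.ipv4.ip_forward = 1
```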
2025-09-19 16:52:30.915195 | orchestrator | 2025-09-19 16:52:30.915301 | orchestrator | PLAY [Apply role frr] ********************************************************** 2025-09-19 16:52:30.915317 | orchestrator | 2025-09-19 16:52:30.915330 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2025-09-19 16:52:30.915343 | orchestrator | Friday 19 September 2025 16:52:11 +0000 (0:00:00.217) 0:00:00.217 ****** 2025-09-19 16:52:30.915355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2025-09-19 16:52:30.915367 | orchestrator | 2025-09-19 16:52:30.915378 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2025-09-19 16:52:30.915389 | orchestrator | Friday 19 September 2025 16:52:11 +0000 (0:00:00.202) 0:00:00.419 ****** 2025-09-19 16:52:30.915401 | orchestrator | changed: [testbed-manager] 2025-09-19 16:52:30.915413 | orchestrator | 2025-09-19 16:52:30.915424 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2025-09-19 16:52:30.915435 | orchestrator | Friday 19 September 2025 16:52:12 +0000 (0:00:01.042) 0:00:01.462 ****** 2025-09-19 16:52:30.915445 | orchestrator | changed: [testbed-manager] 2025-09-19 16:52:30.915456 | orchestrator | 2025-09-19 16:52:30.915467 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2025-09-19 16:52:30.915479 | orchestrator | Friday 19 September 2025 16:52:21 +0000 (0:00:08.711) 0:00:10.173 ****** 2025-09-19 16:52:30.915489 | orchestrator | ok: [testbed-manager] 2025-09-19 16:52:30.915501 | orchestrator | 2025-09-19 16:52:30.915512 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2025-09-19 16:52:30.915523 | orchestrator | Friday 19 September 2025 16:52:22 +0000 (0:00:01.151) 0:00:11.325 ****** 2025-09-19 
16:52:30.915534 | orchestrator | changed: [testbed-manager]
2025-09-19 16:52:30.915545 | orchestrator |
2025-09-19 16:52:30.915556 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-09-19 16:52:30.915566 | orchestrator | Friday 19 September 2025  16:52:23 +0000 (0:00:00.827)       0:00:12.153 ******
2025-09-19 16:52:30.915577 | orchestrator | ok: [testbed-manager]
2025-09-19 16:52:30.915588 | orchestrator |
2025-09-19 16:52:30.915617 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-09-19 16:52:30.915630 | orchestrator | Friday 19 September 2025  16:52:24 +0000 (0:00:01.050)       0:00:13.203 ******
2025-09-19 16:52:30.915641 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-19 16:52:30.915652 | orchestrator |
2025-09-19 16:52:30.915663 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-09-19 16:52:30.915674 | orchestrator | Friday 19 September 2025  16:52:25 +0000 (0:00:00.739)       0:00:13.943 ******
2025-09-19 16:52:30.915685 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:52:30.915695 | orchestrator |
2025-09-19 16:52:30.915707 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-09-19 16:52:30.915742 | orchestrator | Friday 19 September 2025  16:52:25 +0000 (0:00:00.148)       0:00:14.092 ******
2025-09-19 16:52:30.915755 | orchestrator | changed: [testbed-manager]
2025-09-19 16:52:30.915767 | orchestrator |
2025-09-19 16:52:30.915780 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-09-19 16:52:30.915792 | orchestrator | Friday 19 September 2025  16:52:26 +0000 (0:00:00.916)       0:00:15.009 ******
2025-09-19 16:52:30.915805 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-09-19 16:52:30.915817 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-09-19 16:52:30.915855 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-09-19 16:52:30.915868 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-09-19 16:52:30.915880 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-09-19 16:52:30.915893 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-09-19 16:52:30.915905 | orchestrator |
2025-09-19 16:52:30.915918 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-09-19 16:52:30.915930 | orchestrator | Friday 19 September 2025  16:52:28 +0000 (0:00:01.994)       0:00:17.004 ******
2025-09-19 16:52:30.915943 | orchestrator | ok: [testbed-manager]
2025-09-19 16:52:30.915955 | orchestrator |
2025-09-19 16:52:30.915967 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-09-19 16:52:30.915980 | orchestrator | Friday 19 September 2025  16:52:29 +0000 (0:00:01.173)       0:00:18.177 ******
2025-09-19 16:52:30.915992 | orchestrator | changed: [testbed-manager]
2025-09-19 16:52:30.916005 | orchestrator |
2025-09-19 16:52:30.916017 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:52:30.916030 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-19 16:52:30.916043 | orchestrator |
2025-09-19 16:52:30.916055 | orchestrator |
2025-09-19 16:52:30.916067 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:52:30.916080 | orchestrator | Friday 19 September 2025  16:52:30 +0000 (0:00:01.329)       0:00:19.506 ******
2025-09-19 16:52:30.916093 | orchestrator | ===============================================================================
2025-09-19 16:52:30.916104 | orchestrator | osism.services.frr : Install frr package -------------------------------- 8.71s
2025-09-19 16:52:30.916115 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 1.99s
2025-09-19 16:52:30.916126 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.33s
2025-09-19 16:52:30.916137 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.17s
2025-09-19 16:52:30.916163 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.15s
2025-09-19 16:52:30.916175 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.05s
2025-09-19 16:52:30.916186 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.04s
2025-09-19 16:52:30.916197 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.92s
2025-09-19 16:52:30.916208 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.83s
2025-09-19 16:52:30.916218 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.74s
2025-09-19 16:52:30.916229 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.20s
2025-09-19 16:52:30.916240 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.15s
2025-09-19 16:52:31.110598 | orchestrator |
2025-09-19 16:52:31.113062 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Fri Sep 19 16:52:31 UTC 2025
2025-09-19 16:52:31.113139 | orchestrator |
2025-09-19 16:52:32.793145 | orchestrator | 2025-09-19 16:52:32 | INFO  | Collection nutshell is prepared for execution
2025-09-19 16:52:32.793246 | orchestrator | 2025-09-19 16:52:32 | INFO  | D [0] - dotfiles
2025-09-19 16:52:42.850874 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [0] - homer
2025-09-19 16:52:42.850990 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [0] - netdata
2025-09-19 16:52:42.851007 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [0] - openstackclient
2025-09-19 16:52:42.851262 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [0] - phpmyadmin
2025-09-19 16:52:42.851363 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [0] - common
2025-09-19 16:52:42.854662 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [1] -- loadbalancer
2025-09-19 16:52:42.855108 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [2] --- opensearch
2025-09-19 16:52:42.855139 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [2] --- mariadb-ng
2025-09-19 16:52:42.855447 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [3] ---- horizon
2025-09-19 16:52:42.855468 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [3] ---- keystone
2025-09-19 16:52:42.856192 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [4] ----- neutron
2025-09-19 16:52:42.856214 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [5] ------ wait-for-nova
2025-09-19 16:52:42.856227 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [5] ------ octavia
2025-09-19 16:52:42.857263 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [4] ----- barbican
2025-09-19 16:52:42.857303 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [4] ----- designate
2025-09-19 16:52:42.857681 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [4] ----- ironic
2025-09-19 16:52:42.857702 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [4] ----- placement
2025-09-19 16:52:42.857904 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [4] ----- magnum
2025-09-19 16:52:42.858585 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [1] -- openvswitch
2025-09-19 16:52:42.858609 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [2] --- ovn
2025-09-19 16:52:42.859182 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [1] -- memcached
2025-09-19 16:52:42.859329 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [1] -- redis
2025-09-19 16:52:42.859342 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [1] -- rabbitmq-ng
2025-09-19 16:52:42.859771 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [0] - kubernetes
2025-09-19 16:52:42.862406 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [1] -- kubeconfig
2025-09-19 16:52:42.862506 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [1] -- copy-kubeconfig
2025-09-19 16:52:42.862526 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [0] - ceph
2025-09-19 16:52:42.864683 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [1] -- ceph-pools
2025-09-19 16:52:42.865016 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [2] --- copy-ceph-keys
2025-09-19 16:52:42.865045 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [3] ---- cephclient
2025-09-19 16:52:42.865065 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-09-19 16:52:42.865761 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [4] ----- wait-for-keystone
2025-09-19 16:52:42.866120 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [5] ------ kolla-ceph-rgw
2025-09-19 16:52:42.866161 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [5] ------ glance
2025-09-19 16:52:42.866181 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [5] ------ cinder
2025-09-19 16:52:42.866199 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [5] ------ nova
2025-09-19 16:52:42.866564 | orchestrator | 2025-09-19 16:52:42 | INFO  | A [4] ----- prometheus
2025-09-19 16:52:42.866587 | orchestrator | 2025-09-19 16:52:42 | INFO  | D [5] ------ grafana
2025-09-19 16:52:43.052762 | orchestrator | 2025-09-19 16:52:43 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-09-19 16:52:43.052938 | orchestrator | 2025-09-19 16:52:43 | INFO  | Tasks are running in the background
2025-09-19 16:52:46.042316 | orchestrator | 2025-09-19 16:52:46 | INFO  | No task IDs specified, wait for all currently running tasks
2025-09-19 16:52:48.146602 | orchestrator | 2025-09-19 16:52:48 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:52:48.146705 | orchestrator | 2025-09-19 16:52:48 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:52:48.147232 | orchestrator | 2025-09-19 16:52:48 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:52:48.147433 | orchestrator | 2025-09-19 16:52:48 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:52:48.148050 | orchestrator | 2025-09-19 16:52:48 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:52:48.148312 | orchestrator | 2025-09-19 16:52:48 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:52:48.148902 | orchestrator | 2025-09-19 16:52:48 | INFO  | Task 0bb1ffdc-baec-489a-97c3-2218d1c322fe is in state STARTED
2025-09-19 16:52:48.148925 | orchestrator | 2025-09-19 16:52:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:52:51.195525 | orchestrator | 2025-09-19 16:52:51 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:52:51.197419 | orchestrator | 2025-09-19 16:52:51 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:52:51.203353 | orchestrator | 2025-09-19 16:52:51 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:52:51.204429 | orchestrator | 2025-09-19 16:52:51 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:52:51.208373 | orchestrator | 2025-09-19 16:52:51 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:52:51.208768 | orchestrator | 2025-09-19 16:52:51 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:52:51.212579 | orchestrator | 2025-09-19 16:52:51 | INFO  | Task 0bb1ffdc-baec-489a-97c3-2218d1c322fe is in state STARTED
2025-09-19 16:52:51.212613 | orchestrator | 2025-09-19 16:52:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:52:54.245559 | orchestrator | 2025-09-19 16:52:54 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:52:54.245754 | orchestrator | 2025-09-19 16:52:54 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:52:54.245786 | orchestrator | 2025-09-19 16:52:54 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:52:54.247050 | orchestrator | 2025-09-19 16:52:54 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:52:54.247519 | orchestrator | 2025-09-19 16:52:54 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:52:54.251389 | orchestrator | 2025-09-19 16:52:54 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:52:54.251710 | orchestrator | 2025-09-19 16:52:54 | INFO  | Task 0bb1ffdc-baec-489a-97c3-2218d1c322fe is in state STARTED
2025-09-19 16:52:54.251735 | orchestrator | 2025-09-19 16:52:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:52:57.580674 | orchestrator | 2025-09-19 16:52:57 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:52:57.580775 | orchestrator | 2025-09-19 16:52:57 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:52:57.580790 | orchestrator | 2025-09-19 16:52:57 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:52:57.580802 | orchestrator | 2025-09-19 16:52:57 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:52:57.580812 | orchestrator | 2025-09-19 16:52:57 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:52:57.580823 | orchestrator | 2025-09-19 16:52:57 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:52:57.580892 | orchestrator | 2025-09-19 16:52:57 | INFO  | Task 0bb1ffdc-baec-489a-97c3-2218d1c322fe is in state STARTED
2025-09-19 16:52:57.580904 | orchestrator | 2025-09-19 16:52:57 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:00.541631 | orchestrator | 2025-09-19 16:53:00 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:00.541730 | orchestrator | 2025-09-19 16:53:00 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:53:00.541743 | orchestrator | 2025-09-19 16:53:00 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:00.541753 | orchestrator | 2025-09-19 16:53:00 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:00.541763 | orchestrator | 2025-09-19 16:53:00 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:00.541773 | orchestrator | 2025-09-19 16:53:00 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:53:00.541783 | orchestrator | 2025-09-19 16:53:00 | INFO  | Task 0bb1ffdc-baec-489a-97c3-2218d1c322fe is in state STARTED
2025-09-19 16:53:00.541793 | orchestrator | 2025-09-19 16:53:00 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:03.604653 | orchestrator | 2025-09-19 16:53:03 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:03.604778 | orchestrator | 2025-09-19 16:53:03 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:53:03.604794 | orchestrator | 2025-09-19 16:53:03 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:03.607391 | orchestrator | 2025-09-19 16:53:03 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:03.607418 | orchestrator | 2025-09-19 16:53:03 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:03.607952 | orchestrator | 2025-09-19 16:53:03 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:53:03.615381 | orchestrator | 2025-09-19 16:53:03 | INFO  | Task 0bb1ffdc-baec-489a-97c3-2218d1c322fe is in state STARTED
2025-09-19 16:53:03.615432 | orchestrator | 2025-09-19 16:53:03 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:06.740545 | orchestrator | 2025-09-19 16:53:06 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:06.740648 | orchestrator | 2025-09-19 16:53:06 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:53:06.740662 | orchestrator | 2025-09-19 16:53:06 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:06.740674 | orchestrator | 2025-09-19 16:53:06 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:06.740713 | orchestrator | 2025-09-19 16:53:06 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:06.740724 | orchestrator | 2025-09-19 16:53:06 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:53:06.740735 | orchestrator | 2025-09-19 16:53:06 | INFO  | Task 0bb1ffdc-baec-489a-97c3-2218d1c322fe is in state STARTED
2025-09-19 16:53:06.740746 | orchestrator | 2025-09-19 16:53:06 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:09.801656 | orchestrator | 2025-09-19 16:53:09 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:09.801746 | orchestrator | 2025-09-19 16:53:09 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:53:09.801757 | orchestrator | 2025-09-19 16:53:09 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:09.801765 | orchestrator | 2025-09-19 16:53:09 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:09.801773 | orchestrator | 2025-09-19 16:53:09 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:09.801781 | orchestrator | 2025-09-19 16:53:09 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:53:09.801789 | orchestrator | 2025-09-19 16:53:09 | INFO  | Task 0bb1ffdc-baec-489a-97c3-2218d1c322fe is in state STARTED
2025-09-19 16:53:09.801797 | orchestrator | 2025-09-19 16:53:09 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:12.853523 | orchestrator | 2025-09-19 16:53:12 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:12.853621 | orchestrator | 2025-09-19 16:53:12 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:53:12.924074 | orchestrator |
2025-09-19 16:53:12.924157 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-09-19 16:53:12.924168 | orchestrator |
2025-09-19 16:53:12.924177 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-09-19 16:53:12.924186 | orchestrator | Friday 19 September 2025  16:52:55 +0000 (0:00:00.855)       0:00:00.855 ******
2025-09-19 16:53:12.924194 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:53:12.924203 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:53:12.924211 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:53:12.924219 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:53:12.924227 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:53:12.924235 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:53:12.924243 | orchestrator | changed: [testbed-manager]
2025-09-19 16:53:12.924251 | orchestrator |
2025-09-19 16:53:12.924259 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-09-19 16:53:12.924267 | orchestrator | Friday 19 September 2025  16:53:00 +0000 (0:00:04.838)       0:00:05.694 ******
2025-09-19 16:53:12.924276 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 16:53:12.924285 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 16:53:12.924293 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-19 16:53:12.924301 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 16:53:12.924309 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 16:53:12.924317 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 16:53:12.924325 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 16:53:12.924333 | orchestrator |
2025-09-19 16:53:12.924341 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-09-19 16:53:12.924350 | orchestrator | Friday 19 September 2025  16:53:01 +0000 (0:00:01.223)       0:00:06.918 ******
2025-09-19 16:53:12.924369 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 16:53:00.822438', 'end': '2025-09-19 16:53:00.831903', 'delta': '0:00:00.009465', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 16:53:12.924399 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 16:53:00.748266', 'end': '2025-09-19 16:53:00.755287', 'delta': '0:00:00.007021', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 16:53:12.924409 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 16:53:00.772022', 'end': '2025-09-19 16:53:00.778283', 'delta': '0:00:00.006261', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 16:53:12.924439 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 16:53:00.961329', 'end': '2025-09-19 16:53:00.970751', 'delta': '0:00:00.009422', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 16:53:12.924449 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 16:53:01.013510', 'end': '2025-09-19 16:53:01.021262', 'delta': '0:00:00.007752', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 16:53:12.924717 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 16:53:00.832646', 'end': '2025-09-19 16:53:00.836786', 'delta': '0:00:00.004140', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 16:53:12.924730 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-19 16:53:00.943020', 'end': '2025-09-19 16:53:00.951063', 'delta': '0:00:00.008043', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-09-19 16:53:12.924739 | orchestrator |
2025-09-19 16:53:12.924749 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-09-19 16:53:12.924758 | orchestrator | Friday 19 September 2025  16:53:02 +0000 (0:00:01.560)       0:00:08.478 ******
2025-09-19 16:53:12.924767 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 16:53:12.924775 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 16:53:12.924783 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 16:53:12.924791 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 16:53:12.924799 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 16:53:12.924807 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-09-19 16:53:12.924814 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 16:53:12.924822 | orchestrator |
2025-09-19 16:53:12.924857 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-09-19 16:53:12.924866 | orchestrator | Friday 19 September 2025  16:53:04 +0000 (0:00:01.682)       0:00:10.161 ******
2025-09-19 16:53:12.924874 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-09-19 16:53:12.924882 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-09-19 16:53:12.924890 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-09-19 16:53:12.924898 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-09-19 16:53:12.924906 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-09-19 16:53:12.924913 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-09-19 16:53:12.924921 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-09-19 16:53:12.924929 | orchestrator |
2025-09-19 16:53:12.924937 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:53:12.924952 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:53:12.924962 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:53:12.924970 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:53:12.924985 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:53:12.924993 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:53:12.925001 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:53:12.925009 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:53:12.925017 | orchestrator |
2025-09-19 16:53:12.925025 | orchestrator |
2025-09-19 16:53:12.925033 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:53:12.925044 | orchestrator | Friday 19 September 2025  16:53:09 +0000 (0:00:04.758)       0:00:14.920 ******
2025-09-19 16:53:12.925052 | orchestrator | ===============================================================================
2025-09-19 16:53:12.925060 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.84s
2025-09-19 16:53:12.925068 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.76s
2025-09-19 16:53:12.925077 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.68s
2025-09-19 16:53:12.925084 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.56s
2025-09-19 16:53:12.925092 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.22s
2025-09-19 16:53:12.925100 | orchestrator | 2025-09-19 16:53:12 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:12.925108 | orchestrator | 2025-09-19 16:53:12 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:12.925116 | orchestrator | 2025-09-19 16:53:12 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:12.925125 | orchestrator | 2025-09-19 16:53:12 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED
2025-09-19 16:53:12.925132 | orchestrator | 2025-09-19 16:53:12 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:53:12.925140 | orchestrator | 2025-09-19 16:53:12 | INFO  | Task 0bb1ffdc-baec-489a-97c3-2218d1c322fe is in state SUCCESS
2025-09-19 16:53:12.925148 | orchestrator | 2025-09-19 16:53:12 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:15.967798 | orchestrator | 2025-09-19 16:53:15 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:15.969503 | orchestrator | 2025-09-19 16:53:15 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:53:15.970214 | orchestrator | 2025-09-19 16:53:15 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:15.970884 | orchestrator | 2025-09-19 16:53:15 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:15.973876 | orchestrator | 2025-09-19 16:53:15 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:15.974298 | orchestrator | 2025-09-19 16:53:15 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED
2025-09-19 16:53:15.975084 | orchestrator | 2025-09-19 16:53:15 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:53:15.977242 | orchestrator | 2025-09-19 16:53:15 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:19.008115 | orchestrator | 2025-09-19 16:53:19 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:19.008247 | orchestrator | 2025-09-19 16:53:19 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:53:19.008275 | orchestrator | 2025-09-19 16:53:19 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:19.008686 | orchestrator | 2025-09-19 16:53:19 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:19.009596 | orchestrator | 2025-09-19 16:53:19 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:19.011757 | orchestrator | 2025-09-19 16:53:19 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED
2025-09-19 16:53:19.011788 | orchestrator | 2025-09-19 16:53:19 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:53:19.013057 | orchestrator | 2025-09-19 16:53:19 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:22.084799 | orchestrator | 2025-09-19 16:53:22 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:22.084951 | orchestrator | 2025-09-19 16:53:22 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:53:22.085773 | orchestrator | 2025-09-19 16:53:22 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:22.085795 | orchestrator | 2025-09-19 16:53:22 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:22.089502 | orchestrator | 2025-09-19 16:53:22 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:22.091223 | orchestrator | 2025-09-19 16:53:22 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED
2025-09-19 16:53:22.094418 | orchestrator | 2025-09-19 16:53:22 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:53:22.094654 | orchestrator | 2025-09-19 16:53:22 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:25.132161 | orchestrator | 2025-09-19 16:53:25 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:25.132272 | orchestrator | 2025-09-19 16:53:25 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:53:25.132286 | orchestrator | 2025-09-19 16:53:25 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:25.132297 | orchestrator | 2025-09-19 16:53:25 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:25.132307 | orchestrator | 2025-09-19 16:53:25 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:25.132316 | orchestrator | 2025-09-19 16:53:25 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED
2025-09-19 16:53:25.132326 | orchestrator | 2025-09-19 16:53:25 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:53:25.132336 | orchestrator | 2025-09-19 16:53:25 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:28.272655 | orchestrator | 2025-09-19 16:53:28 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:28.272760 | orchestrator | 2025-09-19 16:53:28 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:53:28.272775 | orchestrator | 2025-09-19 16:53:28 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:28.272787 | orchestrator | 2025-09-19 16:53:28 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:28.272798 | orchestrator | 2025-09-19 16:53:28 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:28.272900 | orchestrator | 2025-09-19 16:53:28 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED
2025-09-19 16:53:28.277186 | orchestrator | 2025-09-19 16:53:28 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:53:28.279113 | orchestrator | 2025-09-19 16:53:28 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:31.371648 | orchestrator | 2025-09-19 16:53:31 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:31.374191 | orchestrator | 2025-09-19 16:53:31 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state STARTED
2025-09-19 16:53:31.376765 | orchestrator | 2025-09-19 16:53:31 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:31.378782 | orchestrator | 2025-09-19 16:53:31 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:31.378909 | orchestrator | 2025-09-19 16:53:31 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:31.379648 | orchestrator | 2025-09-19 16:53:31 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED
2025-09-19 16:53:31.380340 | orchestrator | 2025-09-19 16:53:31 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:53:31.381203 | orchestrator | 2025-09-19 16:53:31 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:34.451324 | orchestrator | 2025-09-19 16:53:34 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:34.451427 | orchestrator | 2025-09-19 16:53:34 | INFO  | Task e1ac224e-5996-4936-a966-f42b982b4b08 is in state SUCCESS
2025-09-19 16:53:34.451442 | orchestrator | 2025-09-19 16:53:34 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:34.451453 | orchestrator | 2025-09-19 16:53:34 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:34.451464 | orchestrator | 2025-09-19 16:53:34 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:34.451475 | orchestrator | 2025-09-19 16:53:34 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED
2025-09-19 16:53:34.451486 | orchestrator | 2025-09-19 16:53:34 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:53:34.451497 | orchestrator | 2025-09-19 16:53:34 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:37.484003 | orchestrator | 2025-09-19 16:53:37 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:37.486271 | orchestrator | 2025-09-19 16:53:37 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:37.499509 | orchestrator | 2025-09-19 16:53:37 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:37.499631 | orchestrator | 2025-09-19 16:53:37 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:37.499645 | orchestrator | 2025-09-19 16:53:37 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED
2025-09-19 16:53:37.499657 | orchestrator | 2025-09-19 16:53:37 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:53:37.499669 | orchestrator | 2025-09-19 16:53:37 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:53:40.535532 | orchestrator | 2025-09-19 16:53:40 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:53:40.535646 | orchestrator | 2025-09-19 16:53:40 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state STARTED
2025-09-19 16:53:40.535693 | orchestrator | 2025-09-19 16:53:40 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:53:40.535706 | orchestrator | 2025-09-19 16:53:40 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED
2025-09-19 16:53:40.535716 | orchestrator | 2025-09-19 16:53:40 | INFO  | Task
62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED 2025-09-19 16:53:40.535727 | orchestrator | 2025-09-19 16:53:40 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:53:40.535738 | orchestrator | 2025-09-19 16:53:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 16:53:43.556657 | orchestrator | 2025-09-19 16:53:43 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED 2025-09-19 16:53:43.557541 | orchestrator | 2025-09-19 16:53:43 | INFO  | Task d5b7e459-c22e-4e61-a4e2-ec47eafafdec is in state SUCCESS 2025-09-19 16:53:43.558950 | orchestrator | 2025-09-19 16:53:43 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED 2025-09-19 16:53:43.560386 | orchestrator | 2025-09-19 16:53:43 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED 2025-09-19 16:53:43.561688 | orchestrator | 2025-09-19 16:53:43 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED 2025-09-19 16:53:43.563601 | orchestrator | 2025-09-19 16:53:43 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:53:43.564157 | orchestrator | 2025-09-19 16:53:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 16:53:46.653884 | orchestrator | 2025-09-19 16:53:46 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED 2025-09-19 16:53:46.653949 | orchestrator | 2025-09-19 16:53:46 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED 2025-09-19 16:53:46.653955 | orchestrator | 2025-09-19 16:53:46 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED 2025-09-19 16:53:46.653959 | orchestrator | 2025-09-19 16:53:46 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED 2025-09-19 16:53:46.653964 | orchestrator | 2025-09-19 16:53:46 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:53:46.653968 | orchestrator | 2025-09-19 16:53:46 | INFO  | Wait 1 
second(s) until the next check 2025-09-19 16:53:49.753606 | orchestrator | 2025-09-19 16:53:49 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED 2025-09-19 16:53:49.753789 | orchestrator | 2025-09-19 16:53:49 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED 2025-09-19 16:53:49.755925 | orchestrator | 2025-09-19 16:53:49 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED 2025-09-19 16:53:49.757050 | orchestrator | 2025-09-19 16:53:49 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED 2025-09-19 16:53:49.758104 | orchestrator | 2025-09-19 16:53:49 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:53:49.758115 | orchestrator | 2025-09-19 16:53:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 16:53:52.797716 | orchestrator | 2025-09-19 16:53:52 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED 2025-09-19 16:53:52.797921 | orchestrator | 2025-09-19 16:53:52 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED 2025-09-19 16:53:52.799102 | orchestrator | 2025-09-19 16:53:52 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED 2025-09-19 16:53:52.800617 | orchestrator | 2025-09-19 16:53:52 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED 2025-09-19 16:53:52.802263 | orchestrator | 2025-09-19 16:53:52 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:53:52.802282 | orchestrator | 2025-09-19 16:53:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 16:53:55.839679 | orchestrator | 2025-09-19 16:53:55 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED 2025-09-19 16:53:55.839781 | orchestrator | 2025-09-19 16:53:55 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED 2025-09-19 16:53:55.840199 | orchestrator | 2025-09-19 16:53:55 | INFO  | Task 
7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED 2025-09-19 16:53:55.841273 | orchestrator | 2025-09-19 16:53:55 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED 2025-09-19 16:53:55.842813 | orchestrator | 2025-09-19 16:53:55 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:53:55.843057 | orchestrator | 2025-09-19 16:53:55 | INFO  | Wait 1 second(s) until the next check 2025-09-19 16:53:58.903309 | orchestrator | 2025-09-19 16:53:58 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED 2025-09-19 16:53:58.904124 | orchestrator | 2025-09-19 16:53:58 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED 2025-09-19 16:53:58.907353 | orchestrator | 2025-09-19 16:53:58 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED 2025-09-19 16:53:58.908063 | orchestrator | 2025-09-19 16:53:58 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED 2025-09-19 16:53:58.909103 | orchestrator | 2025-09-19 16:53:58 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:53:58.909128 | orchestrator | 2025-09-19 16:53:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 16:54:01.941272 | orchestrator | 2025-09-19 16:54:01 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED 2025-09-19 16:54:01.941379 | orchestrator | 2025-09-19 16:54:01 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED 2025-09-19 16:54:01.942201 | orchestrator | 2025-09-19 16:54:01 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state STARTED 2025-09-19 16:54:01.943016 | orchestrator | 2025-09-19 16:54:01 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED 2025-09-19 16:54:01.944111 | orchestrator | 2025-09-19 16:54:01 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:54:01.944166 | orchestrator | 2025-09-19 16:54:01 | INFO  | Wait 1 
second(s) until the next check
2025-09-19 16:54:05.008203 | orchestrator | 2025-09-19 16:54:05 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:54:05.009120 | orchestrator | 2025-09-19 16:54:05 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:54:05.009814 | orchestrator | 2025-09-19 16:54:05 | INFO  | Task 7c9801e7-00f5-42c0-beea-8527f6489fda is in state SUCCESS
2025-09-19 16:54:05.010590 | orchestrator |
2025-09-19 16:54:05.010634 | orchestrator |
2025-09-19 16:54:05.010646 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-09-19 16:54:05.010657 | orchestrator |
2025-09-19 16:54:05.010668 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-09-19 16:54:05.010679 | orchestrator | Friday 19 September 2025 16:52:53 +0000 (0:00:00.322) 0:00:00.322 ******
2025-09-19 16:54:05.011587 | orchestrator | ok: [testbed-manager] => {
2025-09-19 16:54:05.011615 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-09-19 16:54:05.011628 | orchestrator | }
2025-09-19 16:54:05.011638 | orchestrator |
2025-09-19 16:54:05.011649 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-09-19 16:54:05.011680 | orchestrator | Friday 19 September 2025 16:52:54 +0000 (0:00:00.142) 0:00:00.465 ******
2025-09-19 16:54:05.011690 | orchestrator | ok: [testbed-manager]
2025-09-19 16:54:05.011701 | orchestrator |
2025-09-19 16:54:05.011711 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-09-19 16:54:05.011720 | orchestrator | Friday 19 September 2025 16:52:56 +0000 (0:00:01.913) 0:00:02.378 ******
2025-09-19 16:54:05.011730 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-09-19 16:54:05.011742 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-09-19 16:54:05.011758 | orchestrator |
2025-09-19 16:54:05.011774 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-09-19 16:54:05.011790 | orchestrator | Friday 19 September 2025 16:52:58 +0000 (0:00:01.966) 0:00:04.345 ******
2025-09-19 16:54:05.011805 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.011821 | orchestrator |
2025-09-19 16:54:05.011880 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-09-19 16:54:05.011892 | orchestrator | Friday 19 September 2025 16:53:00 +0000 (0:00:02.792) 0:00:07.137 ******
2025-09-19 16:54:05.011902 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.011911 | orchestrator |
2025-09-19 16:54:05.011921 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-09-19 16:54:05.011931 | orchestrator | Friday 19 September 2025 16:53:03 +0000 (0:00:02.983) 0:00:10.120 ******
2025-09-19 16:54:05.011941 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-09-19 16:54:05.011951 | orchestrator | ok: [testbed-manager]
2025-09-19 16:54:05.011961 | orchestrator |
2025-09-19 16:54:05.011971 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-09-19 16:54:05.011983 | orchestrator | Friday 19 September 2025 16:53:29 +0000 (0:00:26.009) 0:00:36.130 ******
2025-09-19 16:54:05.011999 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.012015 | orchestrator |
2025-09-19 16:54:05.012030 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:54:05.012046 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:54:05.012063 | orchestrator |
2025-09-19 16:54:05.012079 | orchestrator |
2025-09-19 16:54:05.012096 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:54:05.012113 | orchestrator | Friday 19 September 2025 16:53:32 +0000 (0:00:03.078) 0:00:39.209 ******
2025-09-19 16:54:05.012130 | orchestrator | ===============================================================================
2025-09-19 16:54:05.012148 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.01s
2025-09-19 16:54:05.012164 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.08s
2025-09-19 16:54:05.012181 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.98s
2025-09-19 16:54:05.012196 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.79s
2025-09-19 16:54:05.012212 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.97s
2025-09-19 16:54:05.012229 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.91s
2025-09-19 16:54:05.012244 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.14s
2025-09-19 16:54:05.012260 | orchestrator |
2025-09-19 16:54:05.012279 | orchestrator |
2025-09-19 16:54:05.012296 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-09-19 16:54:05.012313 | orchestrator |
2025-09-19 16:54:05.012331 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-09-19 16:54:05.012348 | orchestrator | Friday 19 September 2025 16:52:55 +0000 (0:00:00.229) 0:00:00.229 ******
2025-09-19 16:54:05.012366 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-09-19 16:54:05.012400 | orchestrator |
2025-09-19 16:54:05.012418 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-09-19 16:54:05.012435 | orchestrator | Friday 19 September 2025 16:52:55 +0000 (0:00:00.230) 0:00:00.459 ******
2025-09-19 16:54:05.012453 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-09-19 16:54:05.012471 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-09-19 16:54:05.012488 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-09-19 16:54:05.012505 | orchestrator |
2025-09-19 16:54:05.012523 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-09-19 16:54:05.012540 | orchestrator | Friday 19 September 2025 16:52:57 +0000 (0:00:02.293) 0:00:02.752 ******
2025-09-19 16:54:05.012557 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.012567 | orchestrator |
2025-09-19 16:54:05.012577 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-09-19 16:54:05.012587 | orchestrator | Friday 19 September 2025 16:52:59 +0000 (0:00:02.394) 0:00:05.147 ******
2025-09-19 16:54:05.012610 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-09-19 16:54:05.012621 | orchestrator | ok: [testbed-manager]
2025-09-19 16:54:05.012630 | orchestrator |
2025-09-19 16:54:05.012640 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-09-19 16:54:05.012650 | orchestrator | Friday 19 September 2025 16:53:34 +0000 (0:00:34.492) 0:00:39.639 ******
2025-09-19 16:54:05.012660 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.012670 | orchestrator |
2025-09-19 16:54:05.012680 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-09-19 16:54:05.012690 | orchestrator | Friday 19 September 2025 16:53:35 +0000 (0:00:01.379) 0:00:41.019 ******
2025-09-19 16:54:05.012699 | orchestrator | ok: [testbed-manager]
2025-09-19 16:54:05.012709 | orchestrator |
2025-09-19 16:54:05.012719 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-09-19 16:54:05.012764 | orchestrator | Friday 19 September 2025 16:53:36 +0000 (0:00:00.623) 0:00:41.642 ******
2025-09-19 16:54:05.012775 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.012785 | orchestrator |
2025-09-19 16:54:05.012795 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-09-19 16:54:05.012804 | orchestrator | Friday 19 September 2025 16:53:38 +0000 (0:00:02.173) 0:00:43.816 ******
2025-09-19 16:54:05.012814 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.012823 | orchestrator |
2025-09-19 16:54:05.012853 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-09-19 16:54:05.012864 | orchestrator | Friday 19 September 2025 16:53:39 +0000 (0:00:01.120) 0:00:44.979 ******
2025-09-19 16:54:05.012873 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.012883 | orchestrator |
2025-09-19 16:54:05.012892 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-09-19 16:54:05.012902 | orchestrator | Friday 19 September 2025 16:53:40 +0000 (0:00:01.120) 0:00:46.099 ******
2025-09-19 16:54:05.012911 | orchestrator | ok: [testbed-manager]
2025-09-19 16:54:05.012921 | orchestrator |
2025-09-19 16:54:05.012931 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:54:05.012941 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:54:05.012950 | orchestrator |
2025-09-19 16:54:05.012960 | orchestrator |
2025-09-19 16:54:05.012974 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:54:05.012984 | orchestrator | Friday 19 September 2025 16:53:41 +0000 (0:00:00.488) 0:00:46.588 ******
2025-09-19 16:54:05.012994 | orchestrator | ===============================================================================
2025-09-19 16:54:05.013003 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 34.49s
2025-09-19 16:54:05.013021 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.39s
2025-09-19 16:54:05.013031 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.29s
2025-09-19 16:54:05.013041 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.17s
2025-09-19 16:54:05.013050 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.38s
2025-09-19 16:54:05.013060 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.16s
2025-09-19 16:54:05.013069 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.12s
2025-09-19 16:54:05.013079 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.62s
2025-09-19 16:54:05.013088 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.49s
2025-09-19 16:54:05.013098 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.23s
2025-09-19 16:54:05.013108 | orchestrator |
2025-09-19 16:54:05.013117 | orchestrator |
2025-09-19 16:54:05.013127 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 16:54:05.013136 | orchestrator |
2025-09-19 16:54:05.013146 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 16:54:05.013278 | orchestrator | Friday 19 September 2025 16:52:54 +0000 (0:00:00.597) 0:00:00.597 ******
2025-09-19 16:54:05.013294 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-09-19 16:54:05.013304 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-09-19 16:54:05.013314 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-09-19 16:54:05.013323 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-09-19 16:54:05.013333 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-09-19 16:54:05.013342 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-09-19 16:54:05.013352 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-09-19 16:54:05.013361 | orchestrator |
2025-09-19 16:54:05.013371 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-09-19 16:54:05.013381 | orchestrator |
2025-09-19 16:54:05.013391 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-09-19 16:54:05.013400 | orchestrator | Friday 19 September 2025 16:52:56 +0000 (0:00:01.837) 0:00:02.434 ******
2025-09-19 16:54:05.013422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:54:05.013434 | orchestrator |
2025-09-19 16:54:05.013444 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-09-19 16:54:05.013454 | orchestrator | Friday 19 September 2025 16:52:58 +0000 (0:00:01.363) 0:00:03.798 ******
2025-09-19 16:54:05.013463 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:54:05.013473 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:54:05.013483 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:54:05.013492 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:54:05.013502 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:54:05.013522 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:54:05.013532 | orchestrator | ok: [testbed-manager]
2025-09-19 16:54:05.013541 | orchestrator |
2025-09-19 16:54:05.013551 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-09-19 16:54:05.013561 | orchestrator | Friday 19 September 2025 16:53:00 +0000 (0:00:01.946) 0:00:05.744 ******
2025-09-19 16:54:05.013571 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:54:05.013580 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:54:05.013589 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:54:05.013599 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:54:05.013608 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:54:05.013618 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:54:05.013627 | orchestrator | ok: [testbed-manager]
2025-09-19 16:54:05.013644 | orchestrator |
2025-09-19 16:54:05.013654 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-09-19 16:54:05.013663 | orchestrator | Friday 19 September 2025 16:53:04 +0000 (0:00:04.528) 0:00:10.273 ******
2025-09-19 16:54:05.013673 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:54:05.013683 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:54:05.013692 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.013702 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:54:05.013711 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:54:05.013721 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:54:05.013730 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:54:05.013740 | orchestrator |
2025-09-19 16:54:05.013749 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-09-19 16:54:05.013759 | orchestrator | Friday 19 September 2025 16:53:06 +0000 (0:00:01.959) 0:00:12.232 ******
2025-09-19 16:54:05.013768 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:54:05.013778 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:54:05.013787 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:54:05.013797 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:54:05.013806 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:54:05.013816 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:54:05.013825 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.013885 | orchestrator |
2025-09-19 16:54:05.013896 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-09-19 16:54:05.013907 | orchestrator | Friday 19 September 2025 16:53:18 +0000 (0:00:11.571) 0:00:23.804 ******
2025-09-19 16:54:05.013919 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:54:05.013935 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:54:05.013947 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:54:05.013958 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:54:05.013970 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:54:05.013981 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:54:05.013992 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.014003 | orchestrator |
2025-09-19 16:54:05.014014 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-09-19 16:54:05.014078 | orchestrator | Friday 19 September 2025 16:53:42 +0000 (0:00:24.399) 0:00:48.204 ******
2025-09-19 16:54:05.014091 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:54:05.014104 | orchestrator |
2025-09-19 16:54:05.014115 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-09-19 16:54:05.014127 | orchestrator | Friday 19 September 2025 16:53:43 +0000 (0:00:01.219) 0:00:49.424 ******
2025-09-19 16:54:05.014138 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-09-19 16:54:05.014150 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-09-19 16:54:05.014161 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-09-19 16:54:05.014172 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-09-19 16:54:05.014184 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-09-19 16:54:05.014195 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-09-19 16:54:05.014206 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-09-19 16:54:05.014218 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-09-19 16:54:05.014229 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-09-19 16:54:05.014241 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-09-19 16:54:05.014252 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-09-19 16:54:05.014263 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-09-19 16:54:05.014275 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-09-19 16:54:05.014293 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-09-19 16:54:05.014302 | orchestrator |
2025-09-19 16:54:05.014312 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-09-19 16:54:05.014322 | orchestrator | Friday 19 September 2025 16:53:48 +0000 (0:00:04.474) 0:00:53.898 ******
2025-09-19 16:54:05.014332 | orchestrator | ok: [testbed-manager]
2025-09-19 16:54:05.014341 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:54:05.014351 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:54:05.014360 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:54:05.014370 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:54:05.014379 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:54:05.014388 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:54:05.014398 | orchestrator |
2025-09-19 16:54:05.014408 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-09-19 16:54:05.014417 | orchestrator | Friday 19 September 2025 16:53:49 +0000 (0:00:01.287) 0:00:55.186 ******
2025-09-19 16:54:05.014427 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.014436 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:54:05.014446 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:54:05.014455 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:54:05.014465 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:54:05.014474 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:54:05.014483 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:54:05.014493 | orchestrator |
2025-09-19 16:54:05.014502 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-09-19 16:54:05.014519 | orchestrator | Friday 19 September 2025 16:53:50 +0000 (0:00:01.208) 0:00:56.394 ******
2025-09-19 16:54:05.014529 | orchestrator | ok: [testbed-manager]
2025-09-19 16:54:05.014539 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:54:05.014548 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:54:05.014558 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:54:05.014567 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:54:05.014577 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:54:05.014586 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:54:05.014596 | orchestrator |
2025-09-19 16:54:05.014605 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-09-19 16:54:05.014615 | orchestrator | Friday 19 September 2025 16:53:52 +0000 (0:00:01.484) 0:00:57.878 ******
2025-09-19 16:54:05.014624 | orchestrator | ok: [testbed-manager]
2025-09-19 16:54:05.014633 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:54:05.014643 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:54:05.014652 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:54:05.014662 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:54:05.014671 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:54:05.014681 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:54:05.014690 | orchestrator |
2025-09-19 16:54:05.014699 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-09-19 16:54:05.014709 | orchestrator | Friday 19 September 2025 16:53:55 +0000 (0:00:03.621) 0:01:01.499 ******
2025-09-19 16:54:05.014719 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-09-19 16:54:05.014730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:54:05.014740 | orchestrator |
2025-09-19 16:54:05.014750 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-09-19 16:54:05.014759 | orchestrator | Friday 19 September 2025 16:53:57 +0000 (0:00:01.676) 0:01:02.839 ******
2025-09-19 16:54:05.014769 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.014778 | orchestrator |
2025-09-19 16:54:05.014788 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-09-19 16:54:05.014797 | orchestrator | Friday 19 September 2025 16:53:58 +0000 (0:00:01.676) 0:01:04.516 ******
2025-09-19 16:54:05.014811 | orchestrator | changed: [testbed-manager]
2025-09-19 16:54:05.014825 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:54:05.014880 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:54:05.014890 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:54:05.014900 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:54:05.014909 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:54:05.014919 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:54:05.014928 | orchestrator |
2025-09-19 16:54:05.014938 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:54:05.014948 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:54:05.014958 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:54:05.014968 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:54:05.014978 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:54:05.014988 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:54:05.014997 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:54:05.015007 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:54:05.015017 | orchestrator |
2025-09-19 16:54:05.015026 | orchestrator |
2025-09-19 16:54:05.015036 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:54:05.015046 | orchestrator | Friday 19 September 2025 16:54:02 +0000 (0:00:03.721) 0:01:08.237 ******
2025-09-19 16:54:05.015055 | orchestrator | ===============================================================================
2025-09-19 16:54:05.015065 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 24.40s
2025-09-19 16:54:05.015075 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.57s
2025-09-19 16:54:05.015084 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.53s
2025-09-19 16:54:05.015094 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.47s
2025-09-19 16:54:05.015103 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.72s
2025-09-19 16:54:05.015113 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.62s
2025-09-19 16:54:05.015123 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.96s
2025-09-19 16:54:05.015132 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.95s
2025-09-19 16:54:05.015142 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.84s
2025-09-19 16:54:05.015151 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.68s
2025-09-19 16:54:05.015161 | orchestrator |
osism.services.netdata : Add netdata user to docker group --------------- 1.48s
2025-09-19 16:54:05.015176 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.36s
2025-09-19 16:54:05.015186 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.34s
2025-09-19 16:54:05.015195 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.29s
2025-09-19 16:54:05.015205 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.22s
2025-09-19 16:54:05.015215 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.21s
2025-09-19 16:54:05.015224 | orchestrator | 2025-09-19 16:54:05 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state STARTED
2025-09-19 16:54:05.015240 | orchestrator | 2025-09-19 16:54:05 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:54:05.015250 | orchestrator | 2025-09-19 16:54:05 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:54:08.049690 | orchestrator | 2025-09-19 16:54:08 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:54:08.050014 | orchestrator | 2025-09-19 16:54:08 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:54:08.050253 | orchestrator | 2025-09-19 16:54:08 | INFO  | Task 62775ce2-ee36-4096-8316-b4098006d7e7 is in state SUCCESS
2025-09-19 16:54:08.051052 | orchestrator | 2025-09-19 16:54:08 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:54:08.051077 | orchestrator | 2025-09-19 16:54:08 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:54:11.086507 | orchestrator | 2025-09-19 16:54:11 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:54:11.086812 | orchestrator | 2025-09-19 16:54:11 | INFO  | Task
d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:54:11.088700 | orchestrator | 2025-09-19 16:54:11 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:54:11.088728 | orchestrator | 2025-09-19 16:54:11 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:54:56.755715 | orchestrator | 2025-09-19 16:54:56 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:54:56.756290 | orchestrator | 2025-09-19 16:54:56 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:54:56.757789 | orchestrator | 2025-09-19 16:54:56 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state
STARTED
2025-09-19 16:54:56.757827 | orchestrator | 2025-09-19 16:54:56 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:54:59.796761 | orchestrator | 2025-09-19 16:54:59 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:54:59.798403 | orchestrator | 2025-09-19 16:54:59 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:54:59.800101 | orchestrator | 2025-09-19 16:54:59 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:54:59.800137 | orchestrator | 2025-09-19 16:54:59 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:02.846974 | orchestrator | 2025-09-19 16:55:02 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:55:02.847352 | orchestrator | 2025-09-19 16:55:02 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:02.851235 | orchestrator | 2025-09-19 16:55:02 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:02.851261 | orchestrator | 2025-09-19 16:55:02 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:05.879040 | orchestrator | 2025-09-19 16:55:05 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state STARTED
2025-09-19 16:55:05.879262 | orchestrator | 2025-09-19 16:55:05 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:05.880059 | orchestrator | 2025-09-19 16:55:05 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:05.880097 | orchestrator | 2025-09-19 16:55:05 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:08.923828 | orchestrator |
2025-09-19 16:55:08.923933 | orchestrator |
2025-09-19 16:55:08.923949 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-09-19 16:55:08.923962 | orchestrator |
2025-09-19 16:55:08.923974 | orchestrator | TASK
[osism.services.phpmyadmin : Create traefik external network] *************
2025-09-19 16:55:08.923985 | orchestrator | Friday 19 September 2025 16:53:14 +0000 (0:00:00.241) 0:00:00.241 ******
2025-09-19 16:55:08.924013 | orchestrator | ok: [testbed-manager]
2025-09-19 16:55:08.924025 | orchestrator |
2025-09-19 16:55:08.924054 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-09-19 16:55:08.924066 | orchestrator | Friday 19 September 2025 16:53:15 +0000 (0:00:00.911) 0:00:01.153 ******
2025-09-19 16:55:08.924077 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-09-19 16:55:08.924088 | orchestrator |
2025-09-19 16:55:08.924099 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-09-19 16:55:08.924110 | orchestrator | Friday 19 September 2025 16:53:16 +0000 (0:00:00.627) 0:00:01.781 ******
2025-09-19 16:55:08.924120 | orchestrator | changed: [testbed-manager]
2025-09-19 16:55:08.924131 | orchestrator |
2025-09-19 16:55:08.924141 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-09-19 16:55:08.924152 | orchestrator | Friday 19 September 2025 16:53:17 +0000 (0:00:00.966) 0:00:02.748 ******
2025-09-19 16:55:08.924163 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
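The `FAILED - RETRYING: ... (10 retries left).` line above is Ansible's `retries`/`until` loop re-running a task until its condition holds. A minimal Python sketch of that retry pattern (the `check` callable and the injectable `sleep` are hypothetical stand-ins, not part of Ansible):

```python
import time

def retry_until(check, retries=10, delay=5.0, sleep=time.sleep):
    """Re-run `check` until it returns truthy, mirroring Ansible's
    retries/until loop: one initial attempt plus `retries` retries,
    logging the remaining count after each failure."""
    for remaining in range(retries, -1, -1):
        if check():
            return True
        if remaining:
            print(f"FAILED - RETRYING ({remaining} retries left).")
            sleep(delay)
    return False

# Example: a check that only succeeds on the third attempt.
attempts = {"n": 0}
def flaky_check():
    attempts["n"] += 1
    return attempts["n"] >= 3

assert retry_until(flaky_check, retries=10, delay=0, sleep=lambda _: None)
```

Here the phpmyadmin container took roughly 45 s to become healthy, so the task failed once, retried, and then reported `ok`.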
2025-09-19 16:55:08.924174 | orchestrator | ok: [testbed-manager]
2025-09-19 16:55:08.924185 | orchestrator |
2025-09-19 16:55:08.924195 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-09-19 16:55:08.924206 | orchestrator | Friday 19 September 2025 16:54:02 +0000 (0:00:45.409) 0:00:48.157 ******
2025-09-19 16:55:08.924216 | orchestrator | changed: [testbed-manager]
2025-09-19 16:55:08.924227 | orchestrator |
2025-09-19 16:55:08.924238 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:55:08.924249 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:55:08.924260 | orchestrator |
2025-09-19 16:55:08.924271 | orchestrator |
2025-09-19 16:55:08.924282 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:55:08.924293 | orchestrator | Friday 19 September 2025 16:54:06 +0000 (0:00:04.264) 0:00:52.422 ******
2025-09-19 16:55:08.924304 | orchestrator | ===============================================================================
2025-09-19 16:55:08.924314 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 45.41s
2025-09-19 16:55:08.924325 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 4.26s
2025-09-19 16:55:08.924336 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 0.97s
2025-09-19 16:55:08.924347 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.91s
2025-09-19 16:55:08.924360 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.63s
2025-09-19 16:55:08.924378 | orchestrator |
2025-09-19 16:55:08.924397 | orchestrator |
2025-09-19 16:55:08.924415 | orchestrator | PLAY [Apply role common]
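The repeated `Task <uuid> is in state STARTED` / `Wait 1 second(s) until the next check` lines earlier in the log come from the orchestrator polling its queued tasks until each reaches a terminal state (the Celery-style `STARTED`/`SUCCESS` states it prints). A minimal sketch of that wait loop, assuming a hypothetical `get_state(task_id)` lookup:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, sleep=time.sleep):
    """Poll every pending task, log its state, and wait `interval`
    seconds between rounds until all tasks are SUCCESS or FAILURE."""
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        pending = {t for t in pending if states[t] not in ("SUCCESS", "FAILURE")}
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            sleep(interval)
    return states

# Example: one task that finishes on the third poll.
seq = {"t1": iter(["STARTED", "STARTED", "SUCCESS"])}
result = wait_for_tasks(["t1"], lambda t: next(seq[t]),
                        interval=0, sleep=lambda _: None)
assert result == {"t1": "SUCCESS"}
```

This matches the log's behaviour of re-printing every still-running task each round while finished tasks (e.g. `62775ce2...` going to `SUCCESS`) drop out of subsequent checks.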
*******************************************************
2025-09-19 16:55:08.924434 | orchestrator |
2025-09-19 16:55:08.924452 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-19 16:55:08.924471 | orchestrator | Friday 19 September 2025 16:52:47 +0000 (0:00:00.258) 0:00:00.258 ******
2025-09-19 16:55:08.924490 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:55:08.924510 | orchestrator |
2025-09-19 16:55:08.924529 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-09-19 16:55:08.924550 | orchestrator | Friday 19 September 2025 16:52:48 +0000 (0:00:01.273) 0:00:01.532 ******
2025-09-19 16:55:08.924569 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 16:55:08.924588 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 16:55:08.924601 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 16:55:08.924613 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 16:55:08.924637 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 16:55:08.924649 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 16:55:08.924662 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 16:55:08.924674 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 16:55:08.924686 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 16:55:08.924699 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 16:55:08.924711 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 16:55:08.924740 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-09-19 16:55:08.924753 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 16:55:08.924766 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 16:55:08.924778 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 16:55:08.924803 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 16:55:08.924868 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 16:55:08.924883 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-09-19 16:55:08.924894 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 16:55:08.924905 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 16:55:08.924916 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-09-19 16:55:08.924926 | orchestrator |
2025-09-19 16:55:08.924937 | orchestrator | TASK [common : include_tasks] **************************************************
2025-09-19 16:55:08.924948 | orchestrator | Friday 19 September 2025 16:52:52 +0000 (0:00:04.106) 0:00:05.639 ******
2025-09-19 16:55:08.924959 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:55:08.924971 | orchestrator |
2025-09-19 16:55:08.924982 |
orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-19 16:55:08.924993 | orchestrator | Friday 19 September 2025 16:52:54 +0000 (0:00:01.262) 0:00:06.902 ****** 2025-09-19 16:55:08.925007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.925022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.925034 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.925054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 
'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.925066 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.925082 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.925101 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.925114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.925126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.925137 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925183 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925218 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925238 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925388 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925423 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925449 | orchestrator |
2025-09-19 16:55:08.925460 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-09-19 16:55:08.925472 | orchestrator | Friday 19 September 2025 16:52:59 +0000 (0:00:05.578) 0:00:12.481 ******
2025-09-19 16:55:08.925504 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.925517 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925529 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925540 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:55:08.925552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.925571 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925593 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:55:08.925604 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.925620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.925668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925700 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:55:08.925711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.925722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.925768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925779 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925791 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:55:08.925807 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:55:08.925818 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:55:08.925878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.925893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925916 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:55:08.925927 | orchestrator |
2025-09-19 16:55:08.925938 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-09-19 16:55:08.925949 | orchestrator | Friday 19 September 2025 16:53:01 +0000 (0:00:01.476) 0:00:13.957 ******
2025-09-19 16:55:08.925961 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.925978 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.925997 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926009 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:55:08.926078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.926099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.926111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926157 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:55:08.926168 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.926192 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926214 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.926235 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:55:08.926245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.926275 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926303 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:55:08.926314 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:55:08.926329 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:55:08.926339 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.926349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926369 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:55:08.926379 | orchestrator |
2025-09-19 16:55:08.926388 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-09-19 16:55:08.926398 | orchestrator | Friday 19 September 2025 16:53:04 +0000 (0:00:02.907) 0:00:16.864 ******
2025-09-19 16:55:08.926408 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:55:08.926418 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:55:08.926427 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:55:08.926437 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:55:08.926447 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:55:08.926456 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:55:08.926465 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:55:08.926475 | orchestrator |
2025-09-19 16:55:08.926485 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-09-19 16:55:08.926495 | orchestrator | Friday 19 September 2025 16:53:04 +0000 (0:00:00.771) 0:00:17.635 ******
2025-09-19 16:55:08.926504 | orchestrator | skipping: [testbed-manager]
2025-09-19 16:55:08.926514 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:55:08.926524 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:55:08.926533 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:55:08.926543 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:55:08.926552 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:55:08.926562 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:55:08.926571 | orchestrator |
2025-09-19 16:55:08.926581 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-09-19 16:55:08.926591 | orchestrator | Friday 19 September 2025 16:53:06 +0000 (0:00:01.238) 0:00:18.874 ******
2025-09-19 16:55:08.926601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.926611 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.926636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.926647 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.926657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.926667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.926721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926732 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.926742 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926752 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926773 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.926798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.926817 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.926827 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.926853 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.926864 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.926874 | orchestrator | 2025-09-19 16:55:08.926884 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-19 16:55:08.926894 | orchestrator | Friday 19 September 2025 16:53:14 +0000 (0:00:08.154) 0:00:27.028 ****** 2025-09-19 16:55:08.926904 | orchestrator | [WARNING]: Skipped 2025-09-19 16:55:08.926915 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-19 16:55:08.926924 | orchestrator | to this access issue: 2025-09-19 16:55:08.926934 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-19 16:55:08.926944 | orchestrator | directory 2025-09-19 16:55:08.926954 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 16:55:08.926963 | orchestrator | 2025-09-19 16:55:08.926973 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-19 16:55:08.926983 | orchestrator | Friday 19 September 2025 16:53:15 +0000 (0:00:00.946) 0:00:27.975 ****** 2025-09-19 16:55:08.926992 | orchestrator | [WARNING]: Skipped 2025-09-19 16:55:08.927002 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-09-19 16:55:08.927012 | orchestrator | to this access issue: 2025-09-19 16:55:08.927022 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-19 16:55:08.927031 | orchestrator | directory 2025-09-19 16:55:08.927041 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 16:55:08.927056 | orchestrator | 2025-09-19 16:55:08.927066 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-19 16:55:08.927076 | orchestrator | Friday 19 September 2025 
16:53:16 +0000 (0:00:01.300) 0:00:29.275 ****** 2025-09-19 16:55:08.927086 | orchestrator | [WARNING]: Skipped 2025-09-19 16:55:08.927096 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-19 16:55:08.927105 | orchestrator | to this access issue: 2025-09-19 16:55:08.927115 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-19 16:55:08.927125 | orchestrator | directory 2025-09-19 16:55:08.927135 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 16:55:08.927144 | orchestrator | 2025-09-19 16:55:08.927154 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-19 16:55:08.927164 | orchestrator | Friday 19 September 2025 16:53:17 +0000 (0:00:01.001) 0:00:30.277 ****** 2025-09-19 16:55:08.927174 | orchestrator | [WARNING]: Skipped 2025-09-19 16:55:08.927188 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-19 16:55:08.927206 | orchestrator | to this access issue: 2025-09-19 16:55:08.927225 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-19 16:55:08.927242 | orchestrator | directory 2025-09-19 16:55:08.927259 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 16:55:08.927276 | orchestrator | 2025-09-19 16:55:08.927292 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-09-19 16:55:08.927309 | orchestrator | Friday 19 September 2025 16:53:18 +0000 (0:00:01.079) 0:00:31.357 ****** 2025-09-19 16:55:08.927326 | orchestrator | changed: [testbed-manager] 2025-09-19 16:55:08.927345 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:55:08.927363 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:55:08.927382 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:55:08.927399 | orchestrator | changed: [testbed-node-4] 2025-09-19 
16:55:08.927417 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:55:08.927434 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:55:08.927448 | orchestrator | 2025-09-19 16:55:08.927458 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-19 16:55:08.927467 | orchestrator | Friday 19 September 2025 16:53:22 +0000 (0:00:03.954) 0:00:35.311 ****** 2025-09-19 16:55:08.927477 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 16:55:08.927492 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 16:55:08.927503 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 16:55:08.927520 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 16:55:08.927531 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 16:55:08.927540 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 16:55:08.927550 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-19 16:55:08.927559 | orchestrator | 2025-09-19 16:55:08.927569 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-19 16:55:08.927579 | orchestrator | Friday 19 September 2025 16:53:25 +0000 (0:00:02.812) 0:00:38.124 ****** 2025-09-19 16:55:08.927588 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:55:08.927598 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:55:08.927607 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:55:08.927617 | orchestrator | changed: [testbed-node-2] 2025-09-19 
16:55:08.927626 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:55:08.927635 | orchestrator | changed: [testbed-manager] 2025-09-19 16:55:08.927652 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:55:08.927662 | orchestrator | 2025-09-19 16:55:08.927671 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-19 16:55:08.927681 | orchestrator | Friday 19 September 2025 16:53:29 +0000 (0:00:03.731) 0:00:41.855 ****** 2025-09-19 16:55:08.927691 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.927702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 16:55:08.927712 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.927722 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 16:55:08.927732 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.927754 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2025-09-19 16:55:08.927765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 16:55:08.927780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 16:55:08.927791 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.927804 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.927814 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.927824 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.927851 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.927872 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 16:55:08.927883 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.927902 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.927912 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 16:55:08.927922 | orchestrator | ok: [testbed-node-5] => 
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.927932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 16:55:08 | INFO  | Task f1c6fa5d-7a02-4de3-8ebb-3245ac9648be is in state SUCCESS 2025-09-19 16:55:08.927954 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.927964 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:55:08.927974 | orchestrator | 2025-09-19 16:55:08.927984 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-19 16:55:08.927993 | orchestrator | Friday 19 September 2025 16:53:32 +0000 (0:00:03.662) 0:00:45.518 ****** 2025-09-19 16:55:08.928003 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 16:55:08.928019 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 16:55:08.928033 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 16:55:08.928043 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 16:55:08.928056 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 16:55:08.928066 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 16:55:08.928076 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-19 16:55:08.928085 | orchestrator | 2025-09-19 16:55:08.928095 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-19 16:55:08.928104 | orchestrator | Friday 19 September 2025 16:53:36 +0000 (0:00:03.328) 0:00:48.847 ****** 2025-09-19 16:55:08.928114 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 16:55:08.928123 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 16:55:08.928133 | orchestrator | changed: 
[testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 16:55:08.928142 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 16:55:08.928152 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 16:55:08.928161 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 16:55:08.928171 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-19 16:55:08.928180 | orchestrator | 2025-09-19 16:55:08.928190 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-19 16:55:08.928199 | orchestrator | Friday 19 September 2025 16:53:38 +0000 (0:00:02.742) 0:00:51.589 ****** 2025-09-19 16:55:08.928209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.928219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.928229 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.928239 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.928258 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-19 16:55:08.928274 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.928284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928294 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-19 16:55:08.928304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928325 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928344 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928381 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928411 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928422 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928437 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:55:08.928471 | orchestrator |
2025-09-19 16:55:08.928481 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-09-19 16:55:08.928491 | orchestrator | Friday 19 September 2025 16:53:42 +0000 (0:00:03.629) 0:00:55.219 ******
2025-09-19 16:55:08.928500 | orchestrator | changed: [testbed-manager]
2025-09-19 16:55:08.928510 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:55:08.928519 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:55:08.928529 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:55:08.928538 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:55:08.928547 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:55:08.928557 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:55:08.928566 | orchestrator |
2025-09-19 16:55:08.928576 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-09-19 16:55:08.928585 | orchestrator | Friday 19 September 2025 16:53:44 +0000 (0:00:01.447) 0:00:56.666 ******
2025-09-19 16:55:08.928595 | orchestrator | changed: [testbed-manager]
2025-09-19 16:55:08.928604 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:55:08.928614 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:55:08.928623 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:55:08.928633 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:55:08.928642 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:55:08.928652 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:55:08.928661 | orchestrator |
2025-09-19 16:55:08.928671 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 16:55:08.928681 | orchestrator | Friday 19 September 2025 16:53:45 +0000 (0:00:01.465) 0:00:58.132 ******
2025-09-19 16:55:08.928690 | orchestrator |
2025-09-19 16:55:08.928700 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 16:55:08.928709 | orchestrator | Friday 19 September 2025 16:53:45 +0000 (0:00:00.072) 0:00:58.205 ******
2025-09-19 16:55:08.928718 | orchestrator |
2025-09-19 16:55:08.928728 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 16:55:08.928737 | orchestrator | Friday 19 September 2025 16:53:45 +0000 (0:00:00.087) 0:00:58.293 ******
2025-09-19 16:55:08.928747 | orchestrator |
2025-09-19 16:55:08.928757 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 16:55:08.928766 | orchestrator | Friday 19 September 2025 16:53:45 +0000 (0:00:00.080) 0:00:58.374 ******
2025-09-19 16:55:08.928775 | orchestrator |
2025-09-19 16:55:08.928785 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 16:55:08.928795 | orchestrator | Friday 19 September 2025 16:53:45 +0000 (0:00:00.245) 0:00:58.619 ******
2025-09-19 16:55:08.928804 | orchestrator |
2025-09-19 16:55:08.928814 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 16:55:08.928829 | orchestrator | Friday 19 September 2025 16:53:46 +0000 (0:00:00.108) 0:00:58.728 ******
2025-09-19 16:55:08.928883 | orchestrator |
2025-09-19 16:55:08.928893 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-09-19 16:55:08.928903 | orchestrator | Friday 19 September 2025 16:53:46 +0000 (0:00:00.070) 0:00:58.799 ******
2025-09-19 16:55:08.928912 | orchestrator |
2025-09-19 16:55:08.928922 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-09-19 16:55:08.928931 | orchestrator | Friday 19 September 2025 16:53:46 +0000 (0:00:00.090) 0:00:58.889 ******
2025-09-19 16:55:08.928941 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:55:08.928950 | orchestrator | changed: [testbed-manager]
2025-09-19 16:55:08.928960 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:55:08.928970 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:55:08.928979 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:55:08.928989 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:55:08.928998 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:55:08.929008 | orchestrator |
2025-09-19 16:55:08.929017 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-09-19 16:55:08.929027 | orchestrator | Friday 19 September 2025 16:54:22 +0000 (0:00:35.913) 0:01:34.803 ******
2025-09-19 16:55:08.929037 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:55:08.929046 | orchestrator | changed: [testbed-manager]
2025-09-19 16:55:08.929056 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:55:08.929065 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:55:08.929075 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:55:08.929084 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:55:08.929094 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:55:08.929103 | orchestrator |
2025-09-19 16:55:08.929113 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-09-19 16:55:08.929122 | orchestrator | Friday 19 September 2025 16:54:54 +0000 (0:00:32.428) 0:02:07.231 ******
2025-09-19 16:55:08.929132 | orchestrator | ok: [testbed-manager]
2025-09-19 16:55:08.929141 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:55:08.929151 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:55:08.929161 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:55:08.929171 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:55:08.929180 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:55:08.929189 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:55:08.929199 | orchestrator |
2025-09-19 16:55:08.929209 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-09-19 16:55:08.929218 | orchestrator | Friday 19 September 2025 16:54:56 +0000 (0:00:01.872) 0:02:09.104 ******
2025-09-19 16:55:08.929228 | orchestrator | changed: [testbed-manager]
2025-09-19 16:55:08.929237 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:55:08.929247 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:55:08.929257 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:55:08.929266 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:55:08.929276 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:55:08.929285 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:55:08.929295 | orchestrator |
2025-09-19 16:55:08.929304 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:55:08.929319 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 16:55:08.929336 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 16:55:08.929347 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 16:55:08.929357 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 16:55:08.929366 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 16:55:08.929382 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 16:55:08.929392 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-09-19 16:55:08.929401 | orchestrator |
2025-09-19 16:55:08.929408 | orchestrator |
2025-09-19 16:55:08.929416 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:55:08.929424 | orchestrator | Friday 19 September 2025 16:55:05 +0000 (0:00:09.133) 0:02:18.237 ******
2025-09-19 16:55:08.929432 | orchestrator | ===============================================================================
2025-09-19 16:55:08.929439 | orchestrator | common : Restart fluentd container ------------------------------------- 35.91s
2025-09-19 16:55:08.929447 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 32.43s
2025-09-19 16:55:08.929455 | orchestrator | common : Restart cron container ----------------------------------------- 9.13s
2025-09-19 16:55:08.929463 | orchestrator | common : Copying over config.json files for services -------------------- 8.15s
2025-09-19 16:55:08.929470 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.58s
2025-09-19 16:55:08.929478 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.11s
2025-09-19 16:55:08.929486 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.95s
2025-09-19 16:55:08.929494 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.73s
2025-09-19 16:55:08.929501 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.66s
2025-09-19 16:55:08.929509 | orchestrator | common : Check common containers ---------------------------------------- 3.63s
2025-09-19 16:55:08.929517 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.33s
2025-09-19 16:55:08.929525 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.91s
2025-09-19 16:55:08.929532 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.81s
2025-09-19 16:55:08.929540 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.74s
2025-09-19 16:55:08.929548 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.87s
2025-09-19 16:55:08.929555 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.48s
2025-09-19 16:55:08.929563 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.47s
2025-09-19 16:55:08.929571 | orchestrator | common : Creating log volume -------------------------------------------- 1.45s
2025-09-19 16:55:08.929579 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.30s
2025-09-19 16:55:08.929586 | orchestrator | common : include_tasks -------------------------------------------------- 1.27s
2025-09-19 16:55:08.929594 | orchestrator | 2025-09-19 16:55:08 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:08.929602 | orchestrator | 2025-09-19 16:55:08 | INFO  | Task bafdf1f5-f4e0-41f8-9352-4c55edb35ea9 is in state STARTED
2025-09-19 16:55:08.929610 | orchestrator | 2025-09-19 16:55:08 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:08.929618 | orchestrator | 2025-09-19 16:55:08 | INFO  | Task 902b9a0c-3bf5-44e4-9398-d545cc9caa28 is in state STARTED
2025-09-19 16:55:08.929626 | orchestrator | 2025-09-19 16:55:08 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:08.929634 | orchestrator | 2025-09-19 16:55:08 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:08.929641 | orchestrator | 2025-09-19 16:55:08 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:11.949772 | orchestrator | 2025-09-19 16:55:11 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:11.950200 | orchestrator | 2025-09-19 16:55:11 | INFO  | Task bafdf1f5-f4e0-41f8-9352-4c55edb35ea9 is in state STARTED
2025-09-19 16:55:11.950798 | orchestrator | 2025-09-19 16:55:11 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:11.951604 | orchestrator | 2025-09-19 16:55:11 | INFO  | Task 902b9a0c-3bf5-44e4-9398-d545cc9caa28 is in state STARTED
2025-09-19 16:55:11.952026 | orchestrator | 2025-09-19 16:55:11 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:11.952706 | orchestrator | 2025-09-19 16:55:11 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:11.952802 | orchestrator | 2025-09-19 16:55:11 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:14.985346 | orchestrator | 2025-09-19 16:55:14 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:14.985573 | orchestrator | 2025-09-19 16:55:14 | INFO  | Task bafdf1f5-f4e0-41f8-9352-4c55edb35ea9 is in state STARTED
2025-09-19 16:55:14.986206 | orchestrator | 2025-09-19 16:55:14 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:14.988206 | orchestrator | 2025-09-19 16:55:14 | INFO  | Task 902b9a0c-3bf5-44e4-9398-d545cc9caa28 is in state STARTED
2025-09-19 16:55:14.991644 | orchestrator | 2025-09-19 16:55:14 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:14.992200 | orchestrator | 2025-09-19 16:55:14 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:14.992223 | orchestrator | 2025-09-19 16:55:14 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:18.030245 | orchestrator | 2025-09-19 16:55:18 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:18.030420 | orchestrator | 2025-09-19 16:55:18 | INFO  | Task bafdf1f5-f4e0-41f8-9352-4c55edb35ea9 is in state STARTED
2025-09-19 16:55:18.030995 | orchestrator | 2025-09-19 16:55:18 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:18.031455 | orchestrator | 2025-09-19 16:55:18 | INFO  | Task 902b9a0c-3bf5-44e4-9398-d545cc9caa28 is in state STARTED
2025-09-19 16:55:18.032483 | orchestrator | 2025-09-19 16:55:18 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:18.033040 | orchestrator | 2025-09-19 16:55:18 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:18.033062 | orchestrator | 2025-09-19 16:55:18 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:21.077411 | orchestrator | 2025-09-19 16:55:21 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:21.078647 | orchestrator | 2025-09-19 16:55:21 | INFO  | Task bafdf1f5-f4e0-41f8-9352-4c55edb35ea9 is in state STARTED
2025-09-19 16:55:21.085115 | orchestrator | 2025-09-19 16:55:21 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:21.086195 | orchestrator | 2025-09-19 16:55:21 | INFO  | Task 902b9a0c-3bf5-44e4-9398-d545cc9caa28 is in state STARTED
2025-09-19 16:55:21.086883 | orchestrator | 2025-09-19 16:55:21 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:21.087680 | orchestrator | 2025-09-19 16:55:21 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:21.087692 | orchestrator | 2025-09-19 16:55:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:24.118704 | orchestrator | 2025-09-19 16:55:24 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:24.120469 | orchestrator | 2025-09-19 16:55:24 | INFO  | Task bafdf1f5-f4e0-41f8-9352-4c55edb35ea9 is in state STARTED
2025-09-19 16:55:24.124374 | orchestrator | 2025-09-19 16:55:24 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:24.126407 | orchestrator | 2025-09-19 16:55:24 | INFO  | Task 902b9a0c-3bf5-44e4-9398-d545cc9caa28 is in state STARTED
2025-09-19 16:55:24.128778 | orchestrator | 2025-09-19 16:55:24 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:24.130805 | orchestrator | 2025-09-19 16:55:24 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:24.131084 | orchestrator | 2025-09-19 16:55:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:27.160094 | orchestrator | 2025-09-19 16:55:27 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:27.161086 | orchestrator | 2025-09-19 16:55:27 | INFO  | Task bafdf1f5-f4e0-41f8-9352-4c55edb35ea9 is in state STARTED
2025-09-19 16:55:27.162958 | orchestrator | 2025-09-19 16:55:27 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:27.163972 | orchestrator | 2025-09-19 16:55:27 | INFO  | Task 902b9a0c-3bf5-44e4-9398-d545cc9caa28 is in state STARTED
2025-09-19 16:55:27.164680 | orchestrator | 2025-09-19 16:55:27 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:27.165964 | orchestrator | 2025-09-19 16:55:27 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:27.165991 | orchestrator | 2025-09-19 16:55:27 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:30.222015 | orchestrator | 2025-09-19 16:55:30 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:30.222207 | orchestrator | 2025-09-19 16:55:30 | INFO  | Task bafdf1f5-f4e0-41f8-9352-4c55edb35ea9 is in state SUCCESS
2025-09-19 16:55:30.222232 | orchestrator | 2025-09-19 16:55:30 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:30.222252 | orchestrator | 2025-09-19 16:55:30 | INFO  | Task 902b9a0c-3bf5-44e4-9398-d545cc9caa28 is in state STARTED
2025-09-19 16:55:30.222271 | orchestrator | 2025-09-19 16:55:30 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:30.222290 | orchestrator | 2025-09-19 16:55:30 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:55:30.222309 | orchestrator | 2025-09-19 16:55:30 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:30.222328 | orchestrator | 2025-09-19 16:55:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:33.301919 | orchestrator | 2025-09-19 16:55:33 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:33.302066 | orchestrator | 2025-09-19 16:55:33 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:33.302083 | orchestrator | 2025-09-19 16:55:33 | INFO  | Task 902b9a0c-3bf5-44e4-9398-d545cc9caa28 is in state STARTED
2025-09-19 16:55:33.302095 | orchestrator | 2025-09-19 16:55:33 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:33.302106 | orchestrator | 2025-09-19 16:55:33 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:55:33.302116 | orchestrator | 2025-09-19 16:55:33 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:33.302157 | orchestrator | 2025-09-19 16:55:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:36.311241 | orchestrator | 2025-09-19 16:55:36 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:36.313214 | orchestrator | 2025-09-19 16:55:36 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:36.313610 | orchestrator | 2025-09-19 16:55:36 | INFO  | Task 902b9a0c-3bf5-44e4-9398-d545cc9caa28 is in state SUCCESS
2025-09-19 16:55:36.315010 | orchestrator |
2025-09-19 16:55:36.315054 | orchestrator |
2025-09-19 16:55:36.315066 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 16:55:36.315078 | orchestrator |
2025-09-19 16:55:36.315090 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 16:55:36.315101 | orchestrator | Friday 19 September 2025 16:55:11 +0000 (0:00:00.478) 0:00:00.478 ******
2025-09-19 16:55:36.315113 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:55:36.315123 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:55:36.315133 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:55:36.315142 | orchestrator |
2025-09-19 16:55:36.315152 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 16:55:36.315161 | orchestrator | Friday 19 September 2025 16:55:12 +0000 (0:00:00.467) 0:00:00.946 ******
2025-09-19 16:55:36.315172 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-09-19 16:55:36.315181 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-09-19 16:55:36.315191 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-09-19 16:55:36.315200 | orchestrator |
2025-09-19 16:55:36.315209 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-09-19 16:55:36.315219 | orchestrator |
2025-09-19 16:55:36.315229 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-09-19 16:55:36.315239 | orchestrator | Friday 19 September 2025 16:55:13 +0000 (0:00:00.886) 0:00:01.832 ******
2025-09-19 16:55:36.315248 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 16:55:36.315258 | orchestrator |
2025-09-19 16:55:36.315268 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-09-19 16:55:36.315277 | orchestrator | Friday 19 September 2025 16:55:14 +0000 (0:00:00.822) 0:00:02.655 ******
2025-09-19 16:55:36.315287 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-19 16:55:36.315297 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-19 16:55:36.315306 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-19 16:55:36.315315 | orchestrator |
2025-09-19 16:55:36.315325 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-09-19 16:55:36.315334 | orchestrator | Friday 19 September 2025 16:55:15 +0000 (0:00:01.077) 0:00:03.733 ******
2025-09-19 16:55:36.315360 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-09-19 16:55:36.315371 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-09-19 16:55:36.315380 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-09-19 16:55:36.315390 | orchestrator |
2025-09-19 16:55:36.315399 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-09-19 16:55:36.315409 | orchestrator | Friday 19 September 2025 16:55:17 +0000 (0:00:02.312) 0:00:06.045 ******
2025-09-19 16:55:36.315418 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:55:36.315428 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:55:36.315437 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:55:36.315447 | orchestrator |
2025-09-19 16:55:36.315456 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-09-19 16:55:36.315466 | orchestrator | Friday 19 September 2025 16:55:20 +0000 (0:00:02.507) 0:00:08.552 ******
2025-09-19 16:55:36.315475 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:55:36.315485 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:55:36.315512 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:55:36.315522 | orchestrator |
2025-09-19 16:55:36.315532 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:55:36.315541 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:55:36.315552 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:55:36.315562 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:55:36.315571 | orchestrator |
2025-09-19 16:55:36.315581 | orchestrator |
2025-09-19 16:55:36.315590 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:55:36.315600 | orchestrator | Friday 19 September 2025 16:55:27 +0000 (0:00:07.239) 0:00:15.791 ******
2025-09-19 16:55:36.315609 | orchestrator | ===============================================================================
2025-09-19 16:55:36.315618 | orchestrator | memcached : Restart memcached container --------------------------------- 7.24s
2025-09-19 16:55:36.315628 | orchestrator | memcached : Check memcached container ----------------------------------- 2.51s
2025-09-19 16:55:36.315637 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.31s
2025-09-19 16:55:36.315647 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.08s
2025-09-19 16:55:36.315656 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s
2025-09-19 16:55:36.315665 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.82s
2025-09-19 16:55:36.315675 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.47s
2025-09-19 16:55:36.315684 | orchestrator |
2025-09-19 16:55:36.315693 | orchestrator |
2025-09-19 16:55:36.315703 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 16:55:36.315712 | orchestrator |
2025-09-19 16:55:36.315722 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 16:55:36.315731 | orchestrator | Friday 19 September 2025 16:55:11 +0000 (0:00:00.354) 0:00:00.354 ******
2025-09-19 16:55:36.315741 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:55:36.315750 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:55:36.315760 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:55:36.315769 | orchestrator |
2025-09-19 16:55:36.315819 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 16:55:36.315859 | orchestrator | Friday 19 September 2025 16:55:11 +0000 (0:00:00.325) 0:00:00.680 ******
2025-09-19 16:55:36.315870 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-09-19 16:55:36.315879 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-09-19 16:55:36.315889 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-09-19 16:55:36.315899 | orchestrator |
2025-09-19 16:55:36.315908 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-09-19 16:55:36.315918 | orchestrator |
2025-09-19 16:55:36.315927 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-09-19 16:55:36.315937 | orchestrator | Friday 19 September 2025 16:55:12 +0000 (0:00:00.506) 0:00:01.187 ******
2025-09-19 16:55:36.315947 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 16:55:36.315956 | orchestrator |
2025-09-19 16:55:36.315966 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-09-19 16:55:36.315976 | orchestrator | Friday 19 September 2025 16:55:12 +0000 (0:00:00.659) 0:00:01.847 ******
2025-09-19 16:55:36.315988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-19 16:55:36.316017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-19 16:55:36.316028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-19 16:55:36.316039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-19 16:55:36.316050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-19 16:55:36.316067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-19 16:55:36.316077 | orchestrator |
2025-09-19 16:55:36.316087 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-09-19 16:55:36.316096 | orchestrator | Friday 19 September 2025 16:55:14 +0000 (0:00:01.680) 0:00:03.527 ******
2025-09-19 16:55:36.316107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-19 16:55:36.316136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-19 16:55:36.316163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316210 | orchestrator | 2025-09-19 16:55:36.316220 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-09-19 16:55:36.316229 | orchestrator | Friday 19 September 2025 16:55:17 +0000 (0:00:03.206) 0:00:06.734 ****** 2025-09-19 16:55:36.316240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': 
{'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316312 | orchestrator | 2025-09-19 16:55:36.316326 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-09-19 16:55:36.316336 | orchestrator | Friday 19 September 2025 16:55:20 +0000 (0:00:03.082) 0:00:09.816 ****** 2025-09-19 16:55:36.316347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
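The `healthcheck` dictionary repeated in each item maps onto a standard container healthcheck; rendered in compose-style YAML it would look roughly like this (an illustrative mapping, assuming Docker healthcheck semantics; values taken from the task output):

```yaml
# Compose-style rendering of the redis healthcheck dict from the log
# (illustrative; kolla generates the equivalent container configuration).
healthcheck:
  test: ["CMD-SHELL", "healthcheck_listen redis-server 6379"]
  interval: 30s     # 'interval': '30'
  timeout: 30s      # 'timeout': '30'
  retries: 3        # 'retries': '3'
  start_period: 5s  # 'start_period': '5'
```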
2025-09-19 16:55:36.316378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-09-19 16:55:36.316418 | orchestrator | 2025-09-19 16:55:36.316428 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 16:55:36.316438 | orchestrator | Friday 19 September 2025 16:55:22 +0000 (0:00:01.561) 0:00:11.378 ****** 2025-09-19 16:55:36.316447 | orchestrator | 2025-09-19 16:55:36.316457 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 16:55:36.316477 | orchestrator | Friday 19 September 2025 16:55:22 +0000 (0:00:00.062) 0:00:11.441 ****** 2025-09-19 16:55:36.316487 | orchestrator | 2025-09-19 16:55:36.316497 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-09-19 16:55:36.316507 | orchestrator | Friday 19 September 2025 16:55:22 +0000 (0:00:00.064) 0:00:11.506 ****** 2025-09-19 16:55:36.316516 | orchestrator | 2025-09-19 16:55:36.316526 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-09-19 16:55:36.316535 | orchestrator | Friday 19 September 2025 16:55:22 +0000 (0:00:00.066) 0:00:11.572 ****** 2025-09-19 16:55:36.316545 | orchestrator | changed: [testbed-node-0] 2025-09-19 
16:55:36.316554 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:55:36.316594 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:55:36.316604 | orchestrator |
2025-09-19 16:55:36.316614 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-09-19 16:55:36.316624 | orchestrator | Friday 19 September 2025 16:55:25 +0000 (0:00:02.533) 0:00:14.106 ******
2025-09-19 16:55:36.316634 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:55:36.316643 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:55:36.316653 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:55:36.316663 | orchestrator |
2025-09-19 16:55:36.316672 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:55:36.316682 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:55:36.316693 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:55:36.316702 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:55:36.316712 | orchestrator |
2025-09-19 16:55:36.316722 | orchestrator |
2025-09-19 16:55:36.316732 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:55:36.316741 | orchestrator | Friday 19 September 2025 16:55:33 +0000 (0:00:08.354) 0:00:22.460 ******
2025-09-19 16:55:36.316751 | orchestrator | ===============================================================================
2025-09-19 16:55:36.316766 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.35s
2025-09-19 16:55:36.316776 | orchestrator | redis : Copying over default config.json files -------------------------- 3.21s
2025-09-19 16:55:36.316786 | orchestrator | redis : Copying over redis config files --------------------------------- 3.08s
2025-09-19 16:55:36.316796 | orchestrator | redis : Restart redis container ----------------------------------------- 2.53s
2025-09-19 16:55:36.316805 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.68s
2025-09-19 16:55:36.316815 | orchestrator | redis : Check redis containers ------------------------------------------ 1.56s
2025-09-19 16:55:36.316825 | orchestrator | redis : include_tasks --------------------------------------------------- 0.66s
2025-09-19 16:55:36.316852 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s
2025-09-19 16:55:36.316862 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s
2025-09-19 16:55:36.316872 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.19s
2025-09-19 16:55:36.316968 | orchestrator | 2025-09-19 16:55:36 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:36.316981 | orchestrator | 2025-09-19 16:55:36 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:55:36.319090 | orchestrator | 2025-09-19 16:55:36 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:36.319137 | orchestrator | 2025-09-19 16:55:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:39.374463 | orchestrator | 2025-09-19 16:55:39 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:39.374574 | orchestrator | 2025-09-19 16:55:39 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:39.374585 | orchestrator | 2025-09-19 16:55:39 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:39.374592 | orchestrator | 2025-09-19 16:55:39 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:55:39.374597
| orchestrator | 2025-09-19 16:55:39 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:39.374604 | orchestrator | 2025-09-19 16:55:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:42.390702 | orchestrator | 2025-09-19 16:55:42 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:42.390802 | orchestrator | 2025-09-19 16:55:42 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:42.390816 | orchestrator | 2025-09-19 16:55:42 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:42.390827 | orchestrator | 2025-09-19 16:55:42 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:55:42.390899 | orchestrator | 2025-09-19 16:55:42 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:42.390910 | orchestrator | 2025-09-19 16:55:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:45.438313 | orchestrator | 2025-09-19 16:55:45 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:45.438424 | orchestrator | 2025-09-19 16:55:45 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:45.438448 | orchestrator | 2025-09-19 16:55:45 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:45.438468 | orchestrator | 2025-09-19 16:55:45 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:55:45.438487 | orchestrator | 2025-09-19 16:55:45 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:45.438506 | orchestrator | 2025-09-19 16:55:45 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:48.466671 | orchestrator | 2025-09-19 16:55:48 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:48.468267 | orchestrator | 2025-09-19 16:55:48 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:48.470290 | orchestrator | 2025-09-19 16:55:48 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:48.473460 | orchestrator | 2025-09-19 16:55:48 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:55:48.473521 | orchestrator | 2025-09-19 16:55:48 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:48.473545 | orchestrator | 2025-09-19 16:55:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:51.596311 | orchestrator | 2025-09-19 16:55:51 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:51.596780 | orchestrator | 2025-09-19 16:55:51 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:51.597383 | orchestrator | 2025-09-19 16:55:51 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:51.598378 | orchestrator | 2025-09-19 16:55:51 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:55:51.599538 | orchestrator | 2025-09-19 16:55:51 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:51.599661 | orchestrator | 2025-09-19 16:55:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:54.695661 | orchestrator | 2025-09-19 16:55:54 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:54.695772 | orchestrator | 2025-09-19 16:55:54 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:54.695787 | orchestrator | 2025-09-19 16:55:54 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:54.695799 | orchestrator | 2025-09-19 16:55:54 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:55:54.695810 | orchestrator | 2025-09-19 16:55:54 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:54.695821 | orchestrator | 2025-09-19 16:55:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:55:57.668329 | orchestrator | 2025-09-19 16:55:57 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:55:57.668562 | orchestrator | 2025-09-19 16:55:57 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:55:57.669285 | orchestrator | 2025-09-19 16:55:57 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:55:57.670119 | orchestrator | 2025-09-19 16:55:57 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:55:57.672526 | orchestrator | 2025-09-19 16:55:57 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:55:57.675225 | orchestrator | 2025-09-19 16:55:57 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:56:00.716732 | orchestrator | 2025-09-19 16:56:00 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:56:00.717722 | orchestrator | 2025-09-19 16:56:00 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:56:00.718508 | orchestrator | 2025-09-19 16:56:00 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:56:00.719313 | orchestrator | 2025-09-19 16:56:00 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:56:00.721907 | orchestrator | 2025-09-19 16:56:00 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:56:00.721928 | orchestrator | 2025-09-19 16:56:00 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:56:03.762174 | orchestrator | 2025-09-19 16:56:03 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:56:03.762273 | orchestrator | 2025-09-19 16:56:03 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:56:03.766883 | orchestrator | 2025-09-19 16:56:03 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:56:03.766923 | orchestrator | 2025-09-19 16:56:03 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:56:03.767535 | orchestrator | 2025-09-19 16:56:03 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:56:03.767820 | orchestrator | 2025-09-19 16:56:03 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:56:06.811158 | orchestrator | 2025-09-19 16:56:06 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:56:06.813433 | orchestrator | 2025-09-19 16:56:06 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:56:06.814297 | orchestrator | 2025-09-19 16:56:06 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state STARTED
2025-09-19 16:56:06.815098 | orchestrator | 2025-09-19 16:56:06 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:56:06.816082 | orchestrator | 2025-09-19 16:56:06 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:56:06.816359 | orchestrator | 2025-09-19 16:56:06 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:56:09.892809 | orchestrator | 2025-09-19 16:56:09 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED
2025-09-19 16:56:09.894003 | orchestrator | 2025-09-19 16:56:09 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:56:09.895071 | orchestrator | 2025-09-19 16:56:09 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:56:09.896702 | orchestrator | 2025-09-19 16:56:09 | INFO  | Task 404e0476-c2ee-4b99-a09f-5907cb833c29 is in state SUCCESS
2025-09-19 16:56:09.898435 | orchestrator |
2025-09-19 16:56:09.898472 | orchestrator
| 2025-09-19 16:56:09.898485 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 16:56:09.898497 | orchestrator |
2025-09-19 16:56:09.898508 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 16:56:09.898520 | orchestrator | Friday 19 September 2025 16:55:11 +0000 (0:00:00.402) 0:00:00.403 ******
2025-09-19 16:56:09.898531 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:56:09.898542 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:56:09.898553 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:56:09.898568 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:56:09.898580 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:56:09.898590 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:56:09.898601 | orchestrator |
2025-09-19 16:56:09.898647 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 16:56:09.898660 | orchestrator | Friday 19 September 2025 16:55:12 +0000 (0:00:00.864) 0:00:01.268 ******
2025-09-19 16:56:09.898670 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 16:56:09.898682 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 16:56:09.898693 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 16:56:09.898704 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 16:56:09.898714 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 16:56:09.898725 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-19 16:56:09.898736 | orchestrator |
2025-09-19 16:56:09.898746 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-09-19 16:56:09.898757 | orchestrator |
2025-09-19 16:56:09.898768 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-09-19 16:56:09.898779 | orchestrator | Friday 19 September 2025 16:55:13 +0000 (0:00:00.995) 0:00:02.263 ******
2025-09-19 16:56:09.898790 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:56:09.898802 | orchestrator |
2025-09-19 16:56:09.898813 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-19 16:56:09.898824 | orchestrator | Friday 19 September 2025 16:55:15 +0000 (0:00:02.030) 0:00:04.294 ******
2025-09-19 16:56:09.898861 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-19 16:56:09.898873 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-19 16:56:09.898884 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-19 16:56:09.898894 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-19 16:56:09.898905 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-19 16:56:09.898916 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-19 16:56:09.898950 | orchestrator |
2025-09-19 16:56:09.898962 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-19 16:56:09.898973 | orchestrator | Friday 19 September 2025 16:55:17 +0000 (0:00:01.787) 0:00:06.081 ******
2025-09-19 16:56:09.898984 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-19 16:56:09.898995 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-19 16:56:09.899006 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-19 16:56:09.899017 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-19 16:56:09.899028 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-19 16:56:09.899041 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-19 16:56:09.899053 | orchestrator |
2025-09-19 16:56:09.899065 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-19 16:56:09.899078 | orchestrator | Friday 19 September 2025 16:55:19 +0000 (0:00:02.399) 0:00:08.481 ******
2025-09-19 16:56:09.899090 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-09-19 16:56:09.899102 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:09.899115 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-09-19 16:56:09.899128 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:09.899140 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-09-19 16:56:09.899152 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2025-09-19 16:56:09.899164 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:09.899176 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2025-09-19 16:56:09.899188 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:56:09.899200 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:56:09.899213 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2025-09-19 16:56:09.899226 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:56:09.899239 | orchestrator |
2025-09-19 16:56:09.899251 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2025-09-19 16:56:09.899264 | orchestrator | Friday 19 September 2025 16:55:21 +0000 (0:00:01.094) 0:00:09.576 ******
2025-09-19 16:56:09.899276 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:09.899288 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:09.899300 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:09.899322 | orchestrator | skipping:
[testbed-node-3] 2025-09-19 16:56:09.899334 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:56:09.899347 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:56:09.899359 | orchestrator | 2025-09-19 16:56:09.899372 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-19 16:56:09.899385 | orchestrator | Friday 19 September 2025 16:55:21 +0000 (0:00:00.660) 0:00:10.236 ****** 2025-09-19 16:56:09.899415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899451 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899474 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899490 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899552 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899563 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899598 | orchestrator | 2025-09-19 16:56:09.899609 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-19 16:56:09.899620 | orchestrator | Friday 19 September 2025 16:55:23 +0000 (0:00:01.346) 0:00:11.583 ****** 2025-09-19 16:56:09.899632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899674 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899709 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899762 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899774 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899800 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899819 | orchestrator | 2025-09-19 16:56:09.899849 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-19 16:56:09.899861 | orchestrator | Friday 19 September 2025 16:55:25 +0000 (0:00:02.568) 0:00:14.151 ****** 2025-09-19 16:56:09.899873 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:56:09.899884 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:56:09.899895 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:56:09.899905 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:56:09.899916 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:56:09.899927 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:56:09.899938 | orchestrator | 2025-09-19 16:56:09.899948 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-19 16:56:09.899959 | orchestrator | Friday 19 September 2025 16:55:26 +0000 (0:00:01.126) 0:00:15.278 ****** 2025-09-19 16:56:09.899971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.899994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.900010 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.900038 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.900050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-19 16:56:09.900062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.900073 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.900085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.900106 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.900132 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.900144 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': 
True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-19 16:56:09.900155 | orchestrator | 2025-09-19 16:56:09.900166 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 16:56:09.900177 | orchestrator | Friday 19 September 2025 16:55:29 +0000 (0:00:02.375) 0:00:17.656 ****** 2025-09-19 16:56:09.900189 | orchestrator | 2025-09-19 16:56:09.900200 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 16:56:09.900211 | orchestrator | Friday 19 September 2025 16:55:29 +0000 (0:00:00.361) 0:00:18.017 ****** 2025-09-19 16:56:09.900221 | orchestrator | 2025-09-19 16:56:09.900233 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 16:56:09.900243 | orchestrator | Friday 19 September 2025 16:55:29 +0000 (0:00:00.124) 0:00:18.142 ****** 2025-09-19 16:56:09.900254 | orchestrator | 2025-09-19 16:56:09.900265 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 16:56:09.900276 | orchestrator | Friday 19 September 2025 16:55:29 +0000 (0:00:00.140) 0:00:18.283 ****** 2025-09-19 16:56:09.900287 | orchestrator | 2025-09-19 16:56:09.900298 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-19 16:56:09.900309 | orchestrator | Friday 19 September 2025 16:55:29 +0000 (0:00:00.126) 0:00:18.409 ****** 2025-09-19 16:56:09.900319 | orchestrator | 2025-09-19 16:56:09.900330 | orchestrator | TASK [openvswitch : Flush Handlers] 
******************************************** 2025-09-19 16:56:09.900341 | orchestrator | Friday 19 September 2025 16:55:30 +0000 (0:00:00.120) 0:00:18.530 ****** 2025-09-19 16:56:09.900352 | orchestrator | 2025-09-19 16:56:09.900363 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-19 16:56:09.900373 | orchestrator | Friday 19 September 2025 16:55:30 +0000 (0:00:00.133) 0:00:18.663 ****** 2025-09-19 16:56:09.900384 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:56:09.900395 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:56:09.900406 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:56:09.900417 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:56:09.900428 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:56:09.900438 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:56:09.900449 | orchestrator | 2025-09-19 16:56:09.900460 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-09-19 16:56:09.900471 | orchestrator | Friday 19 September 2025 16:55:36 +0000 (0:00:05.969) 0:00:24.632 ****** 2025-09-19 16:56:09.900488 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:56:09.900499 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:56:09.900510 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:56:09.900521 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:56:09.900531 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:56:09.900542 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:56:09.900553 | orchestrator | 2025-09-19 16:56:09.900564 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-19 16:56:09.900575 | orchestrator | Friday 19 September 2025 16:55:37 +0000 (0:00:01.307) 0:00:25.940 ****** 2025-09-19 16:56:09.900585 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:56:09.900596 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:56:09.900607 
| orchestrator | changed: [testbed-node-3] 2025-09-19 16:56:09.900618 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:56:09.900629 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:56:09.900640 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:56:09.900650 | orchestrator | 2025-09-19 16:56:09.900661 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-19 16:56:09.900672 | orchestrator | Friday 19 September 2025 16:55:42 +0000 (0:00:04.726) 0:00:30.666 ****** 2025-09-19 16:56:09.900683 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-19 16:56:09.900699 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-19 16:56:09.900711 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-09-19 16:56:09.900722 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-19 16:56:09.900732 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-19 16:56:09.900749 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-19 16:56:09.900760 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-19 16:56:09.900804 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-19 16:56:09.900816 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-19 16:56:09.900827 | orchestrator | changed: [testbed-node-5] => (item={'col': 
'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-19 16:56:09.900853 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-19 16:56:09.900864 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-19 16:56:09.900875 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 16:56:09.900885 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 16:56:09.900896 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 16:56:09.900907 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 16:56:09.900917 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 16:56:09.900928 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-19 16:56:09.900939 | orchestrator | 2025-09-19 16:56:09.900950 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-19 16:56:09.900969 | orchestrator | Friday 19 September 2025 16:55:49 +0000 (0:00:07.456) 0:00:38.123 ****** 2025-09-19 16:56:09.900980 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-19 16:56:09.900990 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:56:09.901001 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-19 16:56:09.901012 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:56:09.901023 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-19 
16:56:09.901034 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:56:09.901045 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-19 16:56:09.901055 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-19 16:56:09.901066 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-19 16:56:09.901077 | orchestrator | 2025-09-19 16:56:09.901088 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-19 16:56:09.901099 | orchestrator | Friday 19 September 2025 16:55:52 +0000 (0:00:03.221) 0:00:41.344 ****** 2025-09-19 16:56:09.901110 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-19 16:56:09.901120 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-19 16:56:09.901131 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:56:09.901142 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:56:09.901153 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-19 16:56:09.901163 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:56:09.901174 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-19 16:56:09.901185 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-19 16:56:09.901196 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-19 16:56:09.901207 | orchestrator | 2025-09-19 16:56:09.901218 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-19 16:56:09.901229 | orchestrator | Friday 19 September 2025 16:55:56 +0000 (0:00:03.467) 0:00:44.812 ****** 2025-09-19 16:56:09.901240 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:56:09.901250 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:56:09.901261 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:56:09.901272 | orchestrator | changed: [testbed-node-3] 2025-09-19 
16:56:09.901282 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:56:09.901293 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:56:09.901304 | orchestrator | 2025-09-19 16:56:09.901315 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 16:56:09.901326 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 16:56:09.901337 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 16:56:09.901354 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 16:56:09.901366 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 16:56:09.901377 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 16:56:09.901394 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 16:56:09.901406 | orchestrator | 2025-09-19 16:56:09.901417 | orchestrator | 2025-09-19 16:56:09.901428 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 16:56:09.901439 | orchestrator | Friday 19 September 2025 16:56:05 +0000 (0:00:09.596) 0:00:54.408 ****** 2025-09-19 16:56:09.901450 | orchestrator | =============================================================================== 2025-09-19 16:56:09.901467 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 14.32s 2025-09-19 16:56:09.901478 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.46s 2025-09-19 16:56:09.901488 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 5.97s 2025-09-19 16:56:09.901499 | orchestrator | openvswitch : Ensuring OVS 
ports are properly setup --------------------- 3.47s 2025-09-19 16:56:09.901510 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.22s 2025-09-19 16:56:09.901520 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.57s 2025-09-19 16:56:09.901531 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.40s 2025-09-19 16:56:09.901542 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.38s 2025-09-19 16:56:09.901552 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.03s 2025-09-19 16:56:09.901563 | orchestrator | module-load : Load modules ---------------------------------------------- 1.79s 2025-09-19 16:56:09.901574 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.35s 2025-09-19 16:56:09.901585 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.31s 2025-09-19 16:56:09.901595 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.13s 2025-09-19 16:56:09.901606 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.09s 2025-09-19 16:56:09.901617 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.01s 2025-09-19 16:56:09.901627 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.00s 2025-09-19 16:56:09.901638 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.86s 2025-09-19 16:56:09.901649 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.66s 2025-09-19 16:56:09.901748 | orchestrator | 2025-09-19 16:56:09 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED 2025-09-19 16:56:09.901763 | orchestrator | 2025-09-19 16:56:09 | INFO  | 
Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:56:09.901774 | orchestrator | 2025-09-19 16:56:09 | INFO  | Wait 1 second(s) until the next check 2025-09-19 16:56:12.929801 | orchestrator | 2025-09-19 16:56:12 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED 2025-09-19 16:56:12.930008 | orchestrator | 2025-09-19 16:56:12 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED 2025-09-19 16:56:12.930882 | orchestrator | 2025-09-19 16:56:12 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED 2025-09-19 16:56:12.931400 | orchestrator | 2025-09-19 16:56:12 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED 2025-09-19 16:56:12.932114 | orchestrator | 2025-09-19 16:56:12 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:56:12.933289 | orchestrator | 2025-09-19 16:56:12 | INFO  | Wait 1 second(s) until the next check 2025-09-19 16:56:31.771464 | orchestrator | 2025-09-19 16:56:31 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state STARTED 2025-09-19 16:56:31.772361 | orchestrator | 2025-09-19 16:56:31 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED 2025-09-19 16:56:31.772679 | orchestrator | 2025-09-19 16:56:31 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED 2025-09-19 16:56:31.773241 | orchestrator | 2025-09-19 16:56:31 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED 2025-09-19 16:56:31.773995 | orchestrator | 2025-09-19 16:56:31 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:56:31.774104 | orchestrator | 2025-09-19 16:56:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 16:56:34.802952 | orchestrator | 2025-09-19 16:56:34 | INFO  | Task d15c1c47-24a4-4dcc-ab45-0004468d2c37 is in state SUCCESS 2025-09-19 16:56:34.804470 | orchestrator | 2025-09-19 16:56:34.804511 | orchestrator | 2025-09-19
16:56:34.804525 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-19 16:56:34.804536 | orchestrator | 2025-09-19 16:56:34.804548 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-19 16:56:34.804559 | orchestrator | Friday 19 September 2025 16:52:48 +0000 (0:00:00.202) 0:00:00.202 ****** 2025-09-19 16:56:34.804595 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:56:34.804607 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:56:34.804617 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:56:34.804628 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:56:34.804639 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:56:34.804649 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:56:34.804660 | orchestrator | 2025-09-19 16:56:34.804671 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-19 16:56:34.804698 | orchestrator | Friday 19 September 2025 16:52:48 +0000 (0:00:00.748) 0:00:00.950 ****** 2025-09-19 16:56:34.804709 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:56:34.804721 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:56:34.804731 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:56:34.804759 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:56:34.804769 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:56:34.804780 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:56:34.804791 | orchestrator | 2025-09-19 16:56:34.804802 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-19 16:56:34.804813 | orchestrator | Friday 19 September 2025 16:52:49 +0000 (0:00:00.652) 0:00:01.603 ****** 2025-09-19 16:56:34.804823 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:56:34.804882 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:56:34.804893 | orchestrator | skipping: 
[testbed-node-5] 2025-09-19 16:56:34.804904 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:56:34.804915 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:56:34.804925 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:56:34.804936 | orchestrator | 2025-09-19 16:56:34.804947 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-19 16:56:34.804958 | orchestrator | Friday 19 September 2025 16:52:50 +0000 (0:00:00.747) 0:00:02.351 ****** 2025-09-19 16:56:34.804969 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:56:34.804980 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:56:34.804990 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:56:34.805001 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:56:34.805012 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:56:34.805022 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:56:34.805033 | orchestrator | 2025-09-19 16:56:34.805044 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-19 16:56:34.805056 | orchestrator | Friday 19 September 2025 16:52:53 +0000 (0:00:02.828) 0:00:05.179 ****** 2025-09-19 16:56:34.805069 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:56:34.805081 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:56:34.805093 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:56:34.805105 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:56:34.805117 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:56:34.805129 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:56:34.805142 | orchestrator | 2025-09-19 16:56:34.805154 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-19 16:56:34.805166 | orchestrator | Friday 19 September 2025 16:52:53 +0000 (0:00:00.845) 0:00:06.025 ****** 2025-09-19 16:56:34.805203 | orchestrator | changed: [testbed-node-3] 
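The k3s_prereq forwarding tasks above ("Enable IPv4 forwarding", "Enable IPv6 forwarding", "Enable IPv6 router advertisements") correspond to standard kernel sysctls. A minimal sketch of equivalent Ansible tasks, assuming the ansible.posix collection; the module choice and values are assumptions based on common practice, not copied from this log:

```yaml
# Hedged sketch of the forwarding prerequisites; keys are standard
# kernel sysctls, values assumed rather than read from this job.
- name: Enable IPv4 forwarding
  ansible.posix.sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present
    reload: true

- name: Enable IPv6 forwarding
  ansible.posix.sysctl:
    name: net.ipv6.conf.all.forwarding
    value: "1"
    state: present
    reload: true

- name: Enable IPv6 router advertisements
  ansible.posix.sysctl:
    name: net.ipv6.conf.all.accept_ra
    value: "2"
    state: present
    reload: true
```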
2025-09-19 16:56:34.805216 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:56:34.805228 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:56:34.805240 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:56:34.805250 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:56:34.805261 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:56:34.805271 | orchestrator | 2025-09-19 16:56:34.805282 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-19 16:56:34.805293 | orchestrator | Friday 19 September 2025 16:52:55 +0000 (0:00:01.322) 0:00:07.347 ****** 2025-09-19 16:56:34.805304 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:56:34.805314 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:56:34.805325 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:56:34.805336 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:56:34.805346 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:56:34.805357 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:56:34.805368 | orchestrator | 2025-09-19 16:56:34.805379 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-19 16:56:34.805389 | orchestrator | Friday 19 September 2025 16:52:55 +0000 (0:00:00.638) 0:00:07.988 ****** 2025-09-19 16:56:34.805400 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:56:34.805411 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:56:34.805422 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:56:34.805432 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:56:34.805443 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:56:34.805453 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:56:34.805464 | orchestrator | 2025-09-19 16:56:34.805475 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-19 16:56:34.805486 | orchestrator | Friday 19 
September 2025 16:52:56 +0000 (0:00:00.749) 0:00:08.737 ****** 2025-09-19 16:56:34.805497 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 16:56:34.805508 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 16:56:34.805519 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:56:34.805530 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 16:56:34.805541 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 16:56:34.805551 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:56:34.805562 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 16:56:34.805573 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 16:56:34.805583 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:56:34.805594 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 16:56:34.805618 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 16:56:34.805629 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:56:34.805640 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 16:56:34.805651 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 16:56:34.805662 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:56:34.805673 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-19 16:56:34.805683 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-19 16:56:34.805694 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:56:34.805705 | orchestrator | 2025-09-19 16:56:34.805716 | orchestrator | TASK 
[k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-19 16:56:34.805733 | orchestrator | Friday 19 September 2025 16:52:57 +0000 (0:00:00.918) 0:00:09.656 ****** 2025-09-19 16:56:34.805744 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:56:34.805763 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:56:34.805774 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:56:34.805785 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:56:34.805795 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:56:34.805806 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:56:34.805816 | orchestrator | 2025-09-19 16:56:34.805827 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-19 16:56:34.805856 | orchestrator | Friday 19 September 2025 16:52:59 +0000 (0:00:01.586) 0:00:11.242 ****** 2025-09-19 16:56:34.805866 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:56:34.805877 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:56:34.805888 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:56:34.805899 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:56:34.805909 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:56:34.805920 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:56:34.805930 | orchestrator | 2025-09-19 16:56:34.805941 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-19 16:56:34.805952 | orchestrator | Friday 19 September 2025 16:53:00 +0000 (0:00:00.847) 0:00:12.090 ****** 2025-09-19 16:56:34.805963 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:56:34.805974 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:56:34.805984 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:56:34.805995 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:56:34.806005 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:56:34.806065 | 
orchestrator | changed: [testbed-node-2] 2025-09-19 16:56:34.806079 | orchestrator | 2025-09-19 16:56:34.806090 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-19 16:56:34.806101 | orchestrator | Friday 19 September 2025 16:53:06 +0000 (0:00:06.990) 0:00:19.081 ****** 2025-09-19 16:56:34.806112 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:56:34.806123 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:56:34.806133 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:56:34.806160 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:56:34.806171 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:56:34.806182 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:56:34.806192 | orchestrator | 2025-09-19 16:56:34.806203 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-19 16:56:34.806214 | orchestrator | Friday 19 September 2025 16:53:09 +0000 (0:00:02.312) 0:00:21.393 ****** 2025-09-19 16:56:34.806225 | orchestrator | skipping: [testbed-node-3] 2025-09-19 16:56:34.806236 | orchestrator | skipping: [testbed-node-4] 2025-09-19 16:56:34.806246 | orchestrator | skipping: [testbed-node-5] 2025-09-19 16:56:34.806257 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:56:34.806268 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:56:34.806278 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:56:34.806289 | orchestrator | 2025-09-19 16:56:34.806300 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-19 16:56:34.806313 | orchestrator | Friday 19 September 2025 16:53:12 +0000 (0:00:02.714) 0:00:24.108 ****** 2025-09-19 16:56:34.806324 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:56:34.806335 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:56:34.806346 | orchestrator | ok: [testbed-node-5] 
2025-09-19 16:56:34.806356 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:56:34.806367 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:56:34.806377 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:56:34.806388 | orchestrator | 2025-09-19 16:56:34.806399 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-19 16:56:34.806410 | orchestrator | Friday 19 September 2025 16:53:12 +0000 (0:00:00.834) 0:00:24.942 ****** 2025-09-19 16:56:34.806420 | orchestrator | changed: [testbed-node-4] => (item=rancher) 2025-09-19 16:56:34.806432 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-19 16:56:34.806442 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-19 16:56:34.806461 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-19 16:56:34.806472 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-19 16:56:34.806482 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-19 16:56:34.806493 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-19 16:56:34.806504 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-19 16:56:34.806514 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-19 16:56:34.806525 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-19 16:56:34.806536 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-19 16:56:34.806547 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-19 16:56:34.806557 | orchestrator | 2025-09-19 16:56:34.806568 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-19 16:56:34.806580 | orchestrator | Friday 19 September 2025 16:53:15 +0000 (0:00:02.381) 0:00:27.324 ****** 2025-09-19 16:56:34.806590 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:56:34.806601 | orchestrator | changed: [testbed-node-0] 
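The file that k3s_custom_registries inserts into /etc/rancher/k3s/registries.yaml follows k3s's containerd registry configuration schema. A hedged example of that shape; the mirror hostname and port are illustrative placeholders, not values from this job:

```yaml
# Illustrative /etc/rancher/k3s/registries.yaml;
# registry.example.com is a placeholder, not from this log.
mirrors:
  docker.io:
    endpoint:
      - "https://registry.example.com:5000"
configs:
  "registry.example.com:5000":
    tls:
      insecure_skip_verify: false
```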
2025-09-19 16:56:34.806612 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:56:34.806623 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:56:34.806634 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:56:34.806644 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:56:34.806655 | orchestrator | 2025-09-19 16:56:34.806674 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-19 16:56:34.806685 | orchestrator | 2025-09-19 16:56:34.806696 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-19 16:56:34.806707 | orchestrator | Friday 19 September 2025 16:53:17 +0000 (0:00:01.832) 0:00:29.156 ****** 2025-09-19 16:56:34.806717 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:56:34.806728 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:56:34.806739 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:56:34.806750 | orchestrator | 2025-09-19 16:56:34.806761 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-19 16:56:34.806771 | orchestrator | Friday 19 September 2025 16:53:18 +0000 (0:00:01.288) 0:00:30.445 ****** 2025-09-19 16:56:34.806782 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:56:34.806793 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:56:34.806810 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:56:34.806820 | orchestrator | 2025-09-19 16:56:34.806851 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-19 16:56:34.806862 | orchestrator | Friday 19 September 2025 16:53:19 +0000 (0:00:01.582) 0:00:32.028 ****** 2025-09-19 16:56:34.806873 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:56:34.806884 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:56:34.806895 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:56:34.806906 | orchestrator | 2025-09-19 16:56:34.806916 | orchestrator | TASK 
[k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-19 16:56:34.806927 | orchestrator | Friday 19 September 2025 16:53:20 +0000 (0:00:01.023) 0:00:33.051 ****** 2025-09-19 16:56:34.806938 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:56:34.806949 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:56:34.806960 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:56:34.806970 | orchestrator | 2025-09-19 16:56:34.806981 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-19 16:56:34.806992 | orchestrator | Friday 19 September 2025 16:53:22 +0000 (0:00:01.085) 0:00:34.137 ****** 2025-09-19 16:56:34.807003 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:56:34.807013 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:56:34.807024 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:56:34.807035 | orchestrator | 2025-09-19 16:56:34.807046 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-19 16:56:34.807057 | orchestrator | Friday 19 September 2025 16:53:22 +0000 (0:00:00.351) 0:00:34.489 ****** 2025-09-19 16:56:34.807068 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:56:34.807079 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:56:34.807097 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:56:34.807108 | orchestrator | 2025-09-19 16:56:34.807118 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-19 16:56:34.807129 | orchestrator | Friday 19 September 2025 16:53:23 +0000 (0:00:00.736) 0:00:35.225 ****** 2025-09-19 16:56:34.807140 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:56:34.807151 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:56:34.807161 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:56:34.807172 | orchestrator | 2025-09-19 16:56:34.807183 | orchestrator | TASK [k3s_server : Deploy vip manifest] 
****************************************
2025-09-19 16:56:34.807194 | orchestrator | Friday 19 September 2025 16:53:24 +0000 (0:00:01.721) 0:00:36.947 ******
2025-09-19 16:56:34.807205 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 16:56:34.807215 | orchestrator |
2025-09-19 16:56:34.807226 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-09-19 16:56:34.807237 | orchestrator | Friday 19 September 2025 16:53:25 +0000 (0:00:01.012) 0:00:37.959 ******
2025-09-19 16:56:34.807247 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:56:34.807258 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:56:34.807269 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:56:34.807279 | orchestrator |
2025-09-19 16:56:34.807290 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-09-19 16:56:34.807301 | orchestrator | Friday 19 September 2025 16:53:28 +0000 (0:00:03.061) 0:00:41.021 ******
2025-09-19 16:56:34.807312 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:34.807323 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:34.807333 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:56:34.807344 | orchestrator |
2025-09-19 16:56:34.807355 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-09-19 16:56:34.807366 | orchestrator | Friday 19 September 2025 16:53:30 +0000 (0:00:01.088) 0:00:42.109 ******
2025-09-19 16:56:34.807376 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:34.807387 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:56:34.807397 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:34.807408 | orchestrator |
2025-09-19 16:56:34.807419 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-09-19 16:56:34.807430 | orchestrator | Friday 19 September 2025 16:53:31 +0000 (0:00:01.521) 0:00:43.630 ******
2025-09-19 16:56:34.807440 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:34.807451 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:34.807462 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:56:34.807473 | orchestrator |
2025-09-19 16:56:34.807484 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-09-19 16:56:34.807495 | orchestrator | Friday 19 September 2025 16:53:33 +0000 (0:00:02.252) 0:00:45.883 ******
2025-09-19 16:56:34.807505 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:34.807516 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.807527 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:34.807537 | orchestrator |
2025-09-19 16:56:34.807548 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-09-19 16:56:34.807559 | orchestrator | Friday 19 September 2025 16:53:34 +0000 (0:00:00.801) 0:00:46.684 ******
2025-09-19 16:56:34.807569 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.807580 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:34.807591 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:34.807601 | orchestrator |
2025-09-19 16:56:34.807612 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-09-19 16:56:34.807623 | orchestrator | Friday 19 September 2025 16:53:35 +0000 (0:00:00.423) 0:00:47.108 ******
2025-09-19 16:56:34.807634 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:56:34.807645 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:56:34.807655 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:56:34.807666 | orchestrator |
2025-09-19 16:56:34.807690 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-09-19 16:56:34.807701 | orchestrator | Friday 19 September 2025 16:53:37 +0000 (0:00:02.211) 0:00:49.320 ******
2025-09-19 16:56:34.807712 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-19 16:56:34.807724 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-19 16:56:34.807740 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-19 16:56:34.807751 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-19 16:56:34.807762 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-19 16:56:34.807773 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-19 16:56:34.807784 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-19 16:56:34.807794 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-19 16:56:34.807805 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-19 16:56:34.807893 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-19 16:56:34.807907 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-19 16:56:34.807918 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-19 16:56:34.807928 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-19 16:56:34.807939 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-19 16:56:34.807950 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-19 16:56:34.807974 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:56:34.807985 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:56:34.807996 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:56:34.808007 | orchestrator |
2025-09-19 16:56:34.808018 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-09-19 16:56:34.808029 | orchestrator | Friday 19 September 2025 16:54:32 +0000 (0:00:55.209) 0:01:44.529 ******
2025-09-19 16:56:34.808040 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.808050 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:34.808061 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:34.808072 | orchestrator |
2025-09-19 16:56:34.808083 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-09-19 16:56:34.808094 | orchestrator | Friday 19 September 2025 16:54:32 +0000 (0:00:00.271) 0:01:44.801 ******
2025-09-19 16:56:34.808104 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:56:34.808115 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:56:34.808126 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:56:34.808137 | orchestrator |
2025-09-19 16:56:34.808148 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-09-19 16:56:34.808159 | orchestrator | Friday 19 September 2025 16:54:33 +0000 (0:00:01.160) 0:01:45.961 ******
2025-09-19 16:56:34.808178 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:56:34.808189 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:56:34.808200 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:56:34.808211 | orchestrator |
2025-09-19 16:56:34.808222 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-09-19 16:56:34.808232 | orchestrator | Friday 19 September 2025 16:54:34 +0000 (0:00:01.094) 0:01:47.056 ******
2025-09-19 16:56:34.808241 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:56:34.808251 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:56:34.808260 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:56:34.808270 | orchestrator |
2025-09-19 16:56:34.808279 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-09-19 16:56:34.808289 | orchestrator | Friday 19 September 2025 16:55:01 +0000 (0:00:26.491) 0:02:13.547 ******
2025-09-19 16:56:34.808298 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:56:34.808308 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:56:34.808318 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:56:34.808327 | orchestrator |
2025-09-19 16:56:34.808337 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-09-19 16:56:34.808347 | orchestrator | Friday 19 September 2025 16:55:02 +0000 (0:00:00.689) 0:02:14.237 ******
2025-09-19 16:56:34.808356 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:56:34.808366 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:56:34.808375 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:56:34.808385 | orchestrator |
2025-09-19 16:56:34.808401 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-09-19 16:56:34.808411 | orchestrator | Friday 19 September 2025 16:55:02 +0000 (0:00:00.687) 0:02:14.924 ******
2025-09-19 16:56:34.808420 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:56:34.808430 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:56:34.808439 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:56:34.808449 | orchestrator |
2025-09-19 16:56:34.808458 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-09-19 16:56:34.808468 | orchestrator | Friday 19 September 2025 16:55:03 +0000 (0:00:00.641) 0:02:15.565 ******
2025-09-19 16:56:34.808478 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:56:34.808487 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:56:34.808497 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:56:34.808506 | orchestrator |
2025-09-19 16:56:34.808516 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-09-19 16:56:34.808531 | orchestrator | Friday 19 September 2025 16:55:04 +0000 (0:00:00.854) 0:02:16.419 ******
2025-09-19 16:56:34.808541 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:56:34.808551 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:56:34.808560 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:56:34.808570 | orchestrator |
2025-09-19 16:56:34.808579 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-09-19 16:56:34.808589 | orchestrator | Friday 19 September 2025 16:55:04 +0000 (0:00:00.306) 0:02:16.726 ******
2025-09-19 16:56:34.808599 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:56:34.808608 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:56:34.808618 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:56:34.808627 | orchestrator |
2025-09-19 16:56:34.808637 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-09-19 16:56:34.808647 | orchestrator | Friday 19 September 2025 16:55:05 +0000 (0:00:00.601) 0:02:17.328 ******
2025-09-19 16:56:34.808656 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:56:34.808666 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:56:34.808675 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:56:34.808685 | orchestrator |
2025-09-19 16:56:34.808695 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-09-19 16:56:34.808704 | orchestrator | Friday 19 September 2025 16:55:05 +0000 (0:00:00.615) 0:02:17.943 ******
2025-09-19 16:56:34.808714 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:56:34.808730 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:56:34.808740 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:56:34.808750 | orchestrator |
2025-09-19 16:56:34.808759 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-09-19 16:56:34.808769 | orchestrator | Friday 19 September 2025 16:55:06 +0000 (0:00:00.987) 0:02:18.931 ******
2025-09-19 16:56:34.808779 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:56:34.808788 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:56:34.808798 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:56:34.808807 | orchestrator |
2025-09-19 16:56:34.808817 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-09-19 16:56:34.808827 | orchestrator | Friday 19 September 2025 16:55:07 +0000 (0:00:01.102) 0:02:20.033 ******
2025-09-19 16:56:34.808853 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.808863 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:34.808873 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:34.808882 | orchestrator |
2025-09-19 16:56:34.808892 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-09-19 16:56:34.808901 | orchestrator | Friday 19 September 2025 16:55:08 +0000 (0:00:00.279) 0:02:20.312 ******
2025-09-19 16:56:34.808911 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.808920 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:34.808930 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:34.808939 | orchestrator |
2025-09-19 16:56:34.808949 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-09-19 16:56:34.808958 | orchestrator | Friday 19 September 2025 16:55:08 +0000 (0:00:00.369) 0:02:20.682 ******
2025-09-19 16:56:34.808968 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:56:34.808977 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:56:34.808987 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:56:34.808996 | orchestrator |
2025-09-19 16:56:34.809006 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-09-19 16:56:34.809015 | orchestrator | Friday 19 September 2025 16:55:09 +0000 (0:00:01.086) 0:02:21.768 ******
2025-09-19 16:56:34.809025 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:56:34.809034 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:56:34.809044 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:56:34.809053 | orchestrator |
2025-09-19 16:56:34.809063 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-09-19 16:56:34.809073 | orchestrator | Friday 19 September 2025 16:55:10 +0000 (0:00:00.703) 0:02:22.472 ******
2025-09-19 16:56:34.809083 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-19 16:56:34.809092 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-19 16:56:34.809102 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-19 16:56:34.809112 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-19 16:56:34.809121 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-19 16:56:34.809131 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-19 16:56:34.809140 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-19 16:56:34.809149 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-19 16:56:34.809159 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-19 16:56:34.809174 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-09-19 16:56:34.809184 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-19 16:56:34.809200 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-19 16:56:34.809209 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-09-19 16:56:34.809219 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-19 16:56:34.809228 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-19 16:56:34.809238 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-19 16:56:34.809247 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-19 16:56:34.809257 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-19 16:56:34.809267 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-19 16:56:34.809276 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-19 16:56:34.809286 | orchestrator |
2025-09-19 16:56:34.809295 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-09-19 16:56:34.809305 | orchestrator |
2025-09-19 16:56:34.809315 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-09-19 16:56:34.809324 | orchestrator | Friday 19 September 2025 16:55:13 +0000 (0:00:03.160) 0:02:25.632 ******
2025-09-19 16:56:34.809334 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:56:34.809343 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:56:34.809353 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:56:34.809363 | orchestrator |
2025-09-19 16:56:34.809372 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-09-19 16:56:34.809382 | orchestrator | Friday 19 September 2025 16:55:13 +0000 (0:00:00.433) 0:02:26.066 ******
2025-09-19 16:56:34.809392 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:56:34.809401 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:56:34.809411 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:56:34.809420 | orchestrator |
2025-09-19 16:56:34.809430 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-09-19 16:56:34.809440 | orchestrator | Friday 19 September 2025 16:55:14 +0000 (0:00:00.325) 0:02:26.668 ******
2025-09-19 16:56:34.809449 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:56:34.809459 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:56:34.809469 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:56:34.809478 | orchestrator |
2025-09-19 16:56:34.809488 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-09-19 16:56:34.809498 | orchestrator | Friday 19 September 2025 16:55:14 +0000 (0:00:00.485) 0:02:26.993 ******
2025-09-19 16:56:34.809507 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:56:34.809517 | orchestrator |
2025-09-19 16:56:34.809526 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-09-19 16:56:34.809536 | orchestrator | Friday 19 September 2025 16:55:15 +0000 (0:00:00.485) 0:02:27.478 ******
2025-09-19 16:56:34.809546 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:56:34.809555 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:56:34.809565 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:56:34.809574 | orchestrator |
2025-09-19 16:56:34.809584 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-09-19 16:56:34.809594 | orchestrator | Friday 19 September 2025 16:55:15 +0000 (0:00:00.385) 0:02:27.864 ******
2025-09-19 16:56:34.809603 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:56:34.809613 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:56:34.810299 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:56:34.810324 | orchestrator |
2025-09-19 16:56:34.810334 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-09-19 16:56:34.810343 | orchestrator | Friday 19 September 2025 16:55:16 +0000 (0:00:00.410) 0:02:28.275 ******
2025-09-19 16:56:34.810362 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:56:34.810372 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:56:34.810381 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:56:34.810391 | orchestrator |
2025-09-19 16:56:34.810400 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2025-09-19 16:56:34.810410 | orchestrator | Friday 19 September 2025 16:55:16 +0000 (0:00:00.260) 0:02:28.535 ******
2025-09-19 16:56:34.810419 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:56:34.810429 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:56:34.810439 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:56:34.810448 | orchestrator |
2025-09-19 16:56:34.810458 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2025-09-19 16:56:34.810468 | orchestrator | Friday 19 September 2025 16:55:17 +0000 (0:00:00.676) 0:02:29.212 ******
2025-09-19 16:56:34.810477 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:56:34.810487 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:56:34.810496 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:56:34.810505 | orchestrator |
2025-09-19 16:56:34.810519 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-09-19 16:56:34.810529 | orchestrator | Friday 19 September 2025 16:55:18 +0000 (0:00:01.370) 0:02:30.582 ******
2025-09-19 16:56:34.810538 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:56:34.810548 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:56:34.810558 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:56:34.810567 | orchestrator |
2025-09-19 16:56:34.810577 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-09-19 16:56:34.810586 | orchestrator | Friday 19 September 2025 16:55:19 +0000 (0:00:01.377) 0:02:31.960 ******
2025-09-19 16:56:34.810596 | orchestrator | changed: [testbed-node-4]
2025-09-19 16:56:34.810605 | orchestrator | changed: [testbed-node-3]
2025-09-19 16:56:34.810614 | orchestrator | changed: [testbed-node-5]
2025-09-19 16:56:34.810624 | orchestrator |
2025-09-19 16:56:34.810643 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-19 16:56:34.810654 | orchestrator |
2025-09-19 16:56:34.810663 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-19 16:56:34.810673 | orchestrator | Friday 19 September 2025 16:55:31 +0000 (0:00:12.027) 0:02:43.988 ******
2025-09-19 16:56:34.810682 | orchestrator | ok: [testbed-manager]
2025-09-19 16:56:34.810692 | orchestrator |
2025-09-19 16:56:34.810701 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-19 16:56:34.810711 | orchestrator | Friday 19 September 2025 16:55:32 +0000 (0:00:00.860) 0:02:44.848 ******
2025-09-19 16:56:34.810720 | orchestrator | changed: [testbed-manager]
2025-09-19 16:56:34.810730 | orchestrator |
2025-09-19 16:56:34.810739 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-19 16:56:34.810749 | orchestrator | Friday 19 September 2025 16:55:33 +0000 (0:00:00.511) 0:02:45.359 ******
2025-09-19 16:56:34.810758 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-19 16:56:34.810768 | orchestrator |
2025-09-19 16:56:34.810778 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-19 16:56:34.810787 | orchestrator | Friday 19 September 2025 16:55:33 +0000 (0:00:00.584) 0:02:45.944 ******
2025-09-19 16:56:34.810797 | orchestrator | changed: [testbed-manager]
2025-09-19 16:56:34.810806 | orchestrator |
2025-09-19 16:56:34.810816 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-19 16:56:34.810826 | orchestrator | Friday 19 September 2025 16:55:34 +0000 (0:00:00.836) 0:02:46.781 ******
2025-09-19 16:56:34.810906 | orchestrator | changed: [testbed-manager]
2025-09-19 16:56:34.810924 | orchestrator |
2025-09-19 16:56:34.810935 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-19 16:56:34.810945 | orchestrator | Friday 19 September 2025 16:55:35 +0000 (0:00:00.533) 0:02:47.315 ******
2025-09-19 16:56:34.810954 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 16:56:34.810972 | orchestrator |
2025-09-19 16:56:34.810982 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-19 16:56:34.810991 | orchestrator | Friday 19 September 2025 16:55:36 +0000 (0:00:01.506) 0:02:48.821 ******
2025-09-19 16:56:34.811001 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-19 16:56:34.811010 | orchestrator |
2025-09-19 16:56:34.811020 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-19 16:56:34.811029 | orchestrator | Friday 19 September 2025 16:55:37 +0000 (0:00:00.779) 0:02:49.601 ******
2025-09-19 16:56:34.811039 | orchestrator | changed: [testbed-manager]
2025-09-19 16:56:34.811048 | orchestrator |
2025-09-19 16:56:34.811058 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-19 16:56:34.811096 | orchestrator | Friday 19 September 2025 16:55:37 +0000 (0:00:00.340) 0:02:49.942 ******
2025-09-19 16:56:34.811106 | orchestrator | changed: [testbed-manager]
2025-09-19 16:56:34.811115 | orchestrator |
2025-09-19 16:56:34.811125 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-09-19 16:56:34.811134 | orchestrator |
2025-09-19 16:56:34.811144 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-09-19 16:56:34.811153 | orchestrator | Friday 19 September 2025 16:55:38 +0000 (0:00:00.670) 0:02:50.613 ******
2025-09-19 16:56:34.811163 | orchestrator | ok: [testbed-manager]
2025-09-19 16:56:34.811172 | orchestrator |
2025-09-19 16:56:34.811182 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-09-19 16:56:34.811191 | orchestrator | Friday 19 September 2025 16:55:38 +0000 (0:00:00.136) 0:02:50.749 ******
2025-09-19 16:56:34.811201 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 16:56:34.811210 | orchestrator |
2025-09-19 16:56:34.811220 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-09-19 16:56:34.811230 | orchestrator | Friday 19 September 2025 16:55:38 +0000 (0:00:00.232) 0:02:50.981 ******
2025-09-19 16:56:34.811239 | orchestrator | ok: [testbed-manager]
2025-09-19 16:56:34.811248 | orchestrator |
2025-09-19 16:56:34.811258 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-09-19 16:56:34.811267 | orchestrator | Friday 19 September 2025 16:55:40 +0000 (0:00:01.231) 0:02:52.213 ******
2025-09-19 16:56:34.811277 | orchestrator | ok: [testbed-manager]
2025-09-19 16:56:34.811286 | orchestrator |
2025-09-19 16:56:34.811296 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-09-19 16:56:34.811305 | orchestrator | Friday 19 September 2025 16:55:41 +0000 (0:00:01.392) 0:02:53.605 ******
2025-09-19 16:56:34.811315 | orchestrator | changed: [testbed-manager]
2025-09-19 16:56:34.811324 | orchestrator |
2025-09-19 16:56:34.811334 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-09-19 16:56:34.811343 | orchestrator | Friday 19 September 2025 16:55:42 +0000 (0:00:00.742) 0:02:54.348 ******
2025-09-19 16:56:34.811353 | orchestrator | ok: [testbed-manager]
2025-09-19 16:56:34.811362 | orchestrator |
2025-09-19 16:56:34.811372 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-09-19 16:56:34.811382 | orchestrator | Friday 19 September 2025 16:55:42 +0000 (0:00:00.470) 0:02:54.818 ******
2025-09-19 16:56:34.811391 | orchestrator | changed: [testbed-manager]
2025-09-19 16:56:34.811400 | orchestrator |
2025-09-19 16:56:34.811407 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-09-19 16:56:34.811423 | orchestrator | Friday 19 September 2025 16:55:49 +0000 (0:00:06.616) 0:03:01.434 ******
2025-09-19 16:56:34.811431 | orchestrator | changed: [testbed-manager]
2025-09-19 16:56:34.811439 | orchestrator |
2025-09-19 16:56:34.811447 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-09-19 16:56:34.811455 | orchestrator | Friday 19 September 2025 16:56:01 +0000 (0:00:12.270) 0:03:13.705 ******
2025-09-19 16:56:34.811462 | orchestrator | ok: [testbed-manager]
2025-09-19 16:56:34.811470 | orchestrator |
2025-09-19 16:56:34.811478 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-09-19 16:56:34.811491 | orchestrator |
2025-09-19 16:56:34.811499 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-09-19 16:56:34.811513 | orchestrator | Friday 19 September 2025 16:56:02 +0000 (0:00:00.567) 0:03:14.273 ******
2025-09-19 16:56:34.811521 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:56:34.811529 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:56:34.811537 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:56:34.811544 | orchestrator |
2025-09-19 16:56:34.811552 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-09-19 16:56:34.811560 | orchestrator | Friday 19 September 2025 16:56:02 +0000 (0:00:00.332) 0:03:14.605 ******
2025-09-19 16:56:34.811568 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.811576 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:34.811583 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:34.811591 | orchestrator |
2025-09-19 16:56:34.811599 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-09-19 16:56:34.811607 | orchestrator | Friday 19 September 2025 16:56:02 +0000 (0:00:00.460) 0:03:15.065 ******
2025-09-19 16:56:34.811614 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 16:56:34.811622 | orchestrator |
2025-09-19 16:56:34.811630 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-09-19 16:56:34.811638 | orchestrator | Friday 19 September 2025 16:56:03 +0000 (0:00:00.947) 0:03:16.012 ******
2025-09-19 16:56:34.811646 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.811653 | orchestrator |
2025-09-19 16:56:34.811661 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-09-19 16:56:34.811669 | orchestrator | Friday 19 September 2025 16:56:04 +0000 (0:00:00.216) 0:03:16.229 ******
2025-09-19 16:56:34.811677 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.811685 | orchestrator |
2025-09-19 16:56:34.811692 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-09-19 16:56:34.811700 | orchestrator | Friday 19 September 2025 16:56:04 +0000 (0:00:00.264) 0:03:16.493 ******
2025-09-19 16:56:34.811708 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.811716 | orchestrator |
2025-09-19 16:56:34.811724 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-09-19 16:56:34.811731 | orchestrator | Friday 19 September 2025 16:56:04 +0000 (0:00:00.209) 0:03:16.703 ******
2025-09-19 16:56:34.811739 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.811747 | orchestrator |
2025-09-19 16:56:34.811755 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-09-19 16:56:34.811763 | orchestrator | Friday 19 September 2025 16:56:04 +0000 (0:00:00.254) 0:03:16.957 ******
2025-09-19 16:56:34.811770 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.811778 | orchestrator |
2025-09-19 16:56:34.811786 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-09-19 16:56:34.811794 | orchestrator | Friday 19 September 2025 16:56:05 +0000 (0:00:00.204) 0:03:17.161 ******
2025-09-19 16:56:34.811801 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.811809 | orchestrator |
2025-09-19 16:56:34.811817 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-09-19 16:56:34.811825 | orchestrator | Friday 19 September 2025 16:56:05 +0000 (0:00:00.223) 0:03:17.385 ******
2025-09-19 16:56:34.811849 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.811858 | orchestrator |
2025-09-19 16:56:34.811866 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-09-19 16:56:34.811873 | orchestrator | Friday 19 September 2025 16:56:05 +0000 (0:00:00.197) 0:03:17.582 ******
2025-09-19 16:56:34.811881 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.811889 | orchestrator |
2025-09-19 16:56:34.811897 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-09-19 16:56:34.811905 | orchestrator | Friday 19 September 2025 16:56:05 +0000 (0:00:00.230) 0:03:17.812 ******
2025-09-19 16:56:34.811918 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.811926 | orchestrator |
2025-09-19 16:56:34.811934 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-09-19 16:56:34.811942 | orchestrator | Friday 19 September 2025 16:56:05 +0000 (0:00:00.222) 0:03:18.034 ******
2025-09-19 16:56:34.811950 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-09-19 16:56:34.811958 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-09-19 16:56:34.811966 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.811973 | orchestrator |
2025-09-19 16:56:34.811981 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-09-19 16:56:34.811989 | orchestrator | Friday 19 September 2025 16:56:06 +0000 (0:00:00.746) 0:03:18.781 ******
2025-09-19 16:56:34.811997 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812005 | orchestrator |
2025-09-19 16:56:34.812012 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-09-19 16:56:34.812020 | orchestrator | Friday 19 September 2025 16:56:06 +0000 (0:00:00.225) 0:03:19.006 ******
2025-09-19 16:56:34.812028 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812036 | orchestrator |
2025-09-19 16:56:34.812044 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-09-19 16:56:34.812052 | orchestrator | Friday 19 September 2025 16:56:07 +0000 (0:00:00.221) 0:03:19.228 ******
2025-09-19 16:56:34.812060 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812067 | orchestrator |
2025-09-19 16:56:34.812075 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-09-19 16:56:34.812087 | orchestrator | Friday 19 September 2025 16:56:07 +0000 (0:00:00.271) 0:03:19.499 ******
2025-09-19 16:56:34.812096 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812103 | orchestrator |
2025-09-19 16:56:34.812111 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-09-19 16:56:34.812119 | orchestrator | Friday 19 September 2025 16:56:07 +0000 (0:00:00.260) 0:03:19.759 ******
2025-09-19 16:56:34.812127 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812135 | orchestrator |
2025-09-19 16:56:34.812143 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-09-19 16:56:34.812151 | orchestrator | Friday 19 September 2025 16:56:07 +0000 (0:00:00.226) 0:03:19.986 ******
2025-09-19 16:56:34.812159 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812167 | orchestrator |
2025-09-19 16:56:34.812175 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-09-19 16:56:34.812188 | orchestrator | Friday 19 September 2025 16:56:08 +0000 (0:00:00.238) 0:03:20.225 ******
2025-09-19 16:56:34.812196 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812204 | orchestrator |
2025-09-19 16:56:34.812212 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-09-19 16:56:34.812219 | orchestrator | Friday 19 September 2025 16:56:08 +0000 (0:00:00.211) 0:03:20.436 ******
2025-09-19 16:56:34.812227 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812235 | orchestrator |
2025-09-19 16:56:34.812243 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-09-19 16:56:34.812251 | orchestrator | Friday 19 September 2025 16:56:08 +0000 (0:00:00.221) 0:03:20.658 ******
2025-09-19 16:56:34.812258 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812266 | orchestrator |
2025-09-19 16:56:34.812274 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-09-19 16:56:34.812282 | orchestrator | Friday 19 September 2025 16:56:08 +0000 (0:00:00.393) 0:03:21.051 ******
2025-09-19 16:56:34.812290 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812297 | orchestrator |
2025-09-19 16:56:34.812305 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-09-19 16:56:34.812313 | orchestrator | Friday 19 September 2025 16:56:09 +0000 (0:00:00.233) 0:03:21.276 ******
2025-09-19 16:56:34.812321 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812328 | orchestrator |
2025-09-19 16:56:34.812342 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-09-19 16:56:34.812350 | orchestrator | Friday 19 September 2025 16:56:09 +0000 (0:00:00.233) 0:03:21.510 ******
2025-09-19 16:56:34.812367 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-09-19 16:56:34.812375 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-09-19 16:56:34.812383 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-09-19 16:56:34.812391 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-09-19 16:56:34.812398 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812406 | orchestrator |
2025-09-19 16:56:34.812414 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-09-19 16:56:34.812428 | orchestrator | Friday 19 September 2025 16:56:10 +0000 (0:00:01.042) 0:03:22.552 ******
2025-09-19 16:56:34.812437 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812444 | orchestrator |
2025-09-19 16:56:34.812452 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-09-19 16:56:34.812460 | orchestrator | Friday 19 September 2025 16:56:10 +0000 (0:00:00.259) 0:03:22.811 ******
2025-09-19 16:56:34.812468 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812476 | orchestrator |
2025-09-19 16:56:34.812483 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-09-19 16:56:34.812491 | orchestrator | Friday 19 September 2025 16:56:10 +0000 (0:00:00.227) 0:03:23.039 ******
2025-09-19 16:56:34.812499 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812507 | orchestrator |
2025-09-19 16:56:34.812514 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-09-19 16:56:34.812522 | orchestrator | Friday 19 September 2025 16:56:11 +0000 (0:00:00.234) 0:03:23.274 ******
2025-09-19 16:56:34.812530 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812538 | orchestrator |
2025-09-19 16:56:34.812545 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-09-19 16:56:34.812553 | orchestrator | Friday 19 September 2025 16:56:11 +0000 (0:00:00.195) 0:03:23.469 ******
2025-09-19 16:56:34.812561 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-09-19 16:56:34.812569 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-09-19 16:56:34.812576 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812584 | orchestrator |
2025-09-19 16:56:34.812592 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-09-19 16:56:34.812600 | orchestrator | Friday 19 September 2025 16:56:11 +0000 (0:00:00.284) 0:03:23.753 ******
2025-09-19 16:56:34.812607 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.812615 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:34.812623 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:34.812631 | orchestrator |
2025-09-19 16:56:34.812639 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-09-19 16:56:34.812646 | orchestrator | Friday 19 September 2025 16:56:11 +0000 (0:00:00.279) 0:03:24.033 ******
2025-09-19 16:56:34.812654 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:56:34.812662 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:56:34.812669 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:56:34.812677 | orchestrator |
2025-09-19 16:56:34.812685 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-09-19 16:56:34.812693 | orchestrator |
2025-09-19 16:56:34.812701 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-09-19 16:56:34.812708 | orchestrator | Friday 19 September 2025 16:56:12 +0000 (0:00:00.989) 0:03:25.022 ******
2025-09-19 16:56:34.812716 | orchestrator | ok: [testbed-manager]
2025-09-19 16:56:34.812724 | orchestrator |
2025-09-19 16:56:34.812732 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-09-19 16:56:34.812743 | orchestrator | Friday 19 September 2025 16:56:13 +0000 (0:00:00.149) 0:03:25.172 ******
2025-09-19 16:56:34.812767 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-09-19 16:56:34.812775 | orchestrator |
2025-09-19 16:56:34.812782 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-09-19 16:56:34.812790 | orchestrator | Friday 19 September 2025 16:56:13 +0000 (0:00:00.192) 0:03:25.364 ******
2025-09-19 16:56:34.812798 | orchestrator | changed: [testbed-manager]
2025-09-19 16:56:34.812806 | orchestrator |
2025-09-19 16:56:34.812814 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-09-19 16:56:34.812821 | orchestrator |
2025-09-19 16:56:34.812844 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-09-19 16:56:34.812857 | orchestrator | Friday 19 September 2025 16:56:18 +0000 (0:00:05.389) 0:03:30.754 ******
2025-09-19 16:56:34.812865 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:56:34.812873 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:56:34.812881 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:56:34.812889 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:56:34.812897 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:56:34.812904 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:56:34.812912 | orchestrator |
2025-09-19 16:56:34.812920 | orchestrator | TASK [Manage labels] ***********************************************************
2025-09-19 16:56:34.812928 | orchestrator | Friday 19 September 2025 16:56:19 +0000 (0:00:00.737) 0:03:31.491 ******
2025-09-19 16:56:34.812935 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-19 16:56:34.812943 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-19 16:56:34.812951 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-09-19 16:56:34.812959 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-19 16:56:34.812966 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-19 16:56:34.812974 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-09-19 16:56:34.812982 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-19 16:56:34.812990 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-19 16:56:34.812997 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-09-19 16:56:34.813008 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-19 16:56:34.813022 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-19 16:56:34.813036 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-09-19 16:56:34.813048 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-19 16:56:34.813060 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-19 16:56:34.813072 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-19 16:56:34.813085 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-09-19 16:56:34.813097 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-19 16:56:34.813109 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-09-19 16:56:34.813122 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-19 16:56:34.813136 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-19 16:56:34.813149 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-09-19 16:56:34.813162 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-19 16:56:34.813179 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-19 16:56:34.813187 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-09-19 16:56:34.813195 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-19 16:56:34.813203 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-19 16:56:34.813211 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-09-19 16:56:34.813219 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-19 16:56:34.813227 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-19 16:56:34.813234 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-09-19 16:56:34.813242 | orchestrator |
2025-09-19 16:56:34.813250 | orchestrator | TASK [Manage annotations] ******************************************************
2025-09-19 16:56:34.813258 | orchestrator | Friday 19 September 2025 16:56:32 +0000 (0:00:12.751) 0:03:44.243 ******
2025-09-19 16:56:34.813265 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:56:34.813273 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:56:34.813281 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:56:34.813288 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.813296 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:34.813304 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:34.813311 | orchestrator |
2025-09-19 16:56:34.813324 | orchestrator | TASK [Manage taints] ***********************************************************
2025-09-19 16:56:34.813332 | orchestrator | Friday 19 September 2025 16:56:32 +0000 (0:00:00.571) 0:03:44.814 ******
2025-09-19 16:56:34.813340 | orchestrator | skipping: [testbed-node-3]
2025-09-19 16:56:34.813347 | orchestrator | skipping: [testbed-node-4]
2025-09-19 16:56:34.813355 | orchestrator | skipping: [testbed-node-5]
2025-09-19 16:56:34.813362 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:56:34.813370 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:56:34.813378 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:56:34.813385 | orchestrator |
2025-09-19 16:56:34.813393 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:56:34.813407 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:56:34.813417 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2025-09-19 16:56:34.813426 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-19 16:56:34.813433 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-19 16:56:34.813441 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 16:56:34.813449 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 16:56:34.813457 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-09-19 16:56:34.813464 | orchestrator |
2025-09-19 16:56:34.813472 | orchestrator |
2025-09-19 16:56:34.813480 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:56:34.813488 | orchestrator | Friday 19 September 2025 16:56:33 +0000 (0:00:00.416) 0:03:45.230 ******
2025-09-19 16:56:34.813495 | orchestrator | ===============================================================================
2025-09-19 16:56:34.813508 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.21s
2025-09-19 16:56:34.813516 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.49s
2025-09-19 16:56:34.813524 | orchestrator | Manage labels ---------------------------------------------------------- 12.75s
2025-09-19 16:56:34.813532 | orchestrator | kubectl : Install required packages ------------------------------------ 12.27s
2025-09-19 16:56:34.813539 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.03s
2025-09-19 16:56:34.813547 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.99s
2025-09-19 16:56:34.813555 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.62s
2025-09-19 16:56:34.813562 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.39s
2025-09-19 16:56:34.813570 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.16s
2025-09-19 16:56:34.813578 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.06s
2025-09-19 16:56:34.813586 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.83s
2025-09-19 16:56:34.813594 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.71s
2025-09-19 16:56:34.813601 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.38s
2025-09-19 16:56:34.813609 | orchestrator | k3s_download : Download k3s binary arm64 -------------------------------- 2.31s
2025-09-19 16:56:34.813616 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.25s
2025-09-19 16:56:34.813624 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.21s
2025-09-19 16:56:34.813632 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.83s
2025-09-19 16:56:34.813639 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.72s
2025-09-19 16:56:34.813647 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 1.59s
2025-09-19 16:56:34.813655 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 1.58s
2025-09-19 16:56:34.813662 | orchestrator | 2025-09-19 16:56:34 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:56:34.813671 | orchestrator | 2025-09-19 16:56:34 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:56:34.813678 | orchestrator | 2025-09-19 16:56:34 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:56:34.813686 | orchestrator | 2025-09-19 16:56:34 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:56:34.813816 | orchestrator | 2025-09-19 16:56:34 | INFO  | Task 0d3eec32-2a78-4078-baab-77cc268a2db8 is in state STARTED
2025-09-19 16:56:34.813873 | orchestrator | 2025-09-19 16:56:34 | INFO  | Task 0149d4bc-cd05-4d77-8467-b9f6d33beb9a is in state STARTED
2025-09-19 16:56:34.813883 | orchestrator | 2025-09-19 16:56:34 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:56:37.847618 | orchestrator | 2025-09-19 16:56:37 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:56:37.847726 | orchestrator | 2025-09-19 16:56:37 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:56:37.848221 | orchestrator | 2025-09-19 16:56:37 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:56:37.848685 | orchestrator | 2025-09-19 16:56:37 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:56:37.849203 | orchestrator | 2025-09-19 16:56:37 | INFO  | Task 0d3eec32-2a78-4078-baab-77cc268a2db8 is in state STARTED
2025-09-19 16:56:37.850604 | orchestrator | 2025-09-19 16:56:37 | INFO  | Task 0149d4bc-cd05-4d77-8467-b9f6d33beb9a is in state STARTED
2025-09-19 16:56:37.850658 | orchestrator | 2025-09-19 16:56:37 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:56:40.933788 | orchestrator | 2025-09-19 16:56:40 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:56:40.933936 | orchestrator | 2025-09-19 16:56:40 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:56:40.933952 | orchestrator | 2025-09-19 16:56:40 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:56:40.933963 | orchestrator | 2025-09-19 16:56:40 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:56:40.935576 | orchestrator | 2025-09-19 16:56:40 | INFO  | Task 0d3eec32-2a78-4078-baab-77cc268a2db8 is in state STARTED
2025-09-19 16:56:40.935952 | orchestrator | 2025-09-19 16:56:40 | INFO  | Task 0149d4bc-cd05-4d77-8467-b9f6d33beb9a is in state SUCCESS
2025-09-19 16:56:40.935975 | orchestrator | 2025-09-19 16:56:40 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:56:43.962575 | orchestrator | 2025-09-19 16:56:43 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:56:43.964018 | orchestrator | 2025-09-19 16:56:43 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:56:43.965654 | orchestrator | 2025-09-19 16:56:43 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:56:43.967201 | orchestrator | 2025-09-19 16:56:43 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:56:43.968218 | orchestrator | 2025-09-19 16:56:43 | INFO  | Task 0d3eec32-2a78-4078-baab-77cc268a2db8 is in state SUCCESS
2025-09-19 16:56:43.968261 | orchestrator | 2025-09-19 16:56:43 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:56:47.040030 | orchestrator | 2025-09-19 16:56:47 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:56:47.044265 | orchestrator | 2025-09-19 16:56:47 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:56:47.046162 | orchestrator | 2025-09-19 16:56:47 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:56:47.048169 | orchestrator | 2025-09-19 16:56:47 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:56:47.048916 | orchestrator | 2025-09-19 16:56:47 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:56:50.110089 | orchestrator | 2025-09-19 16:56:50 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:56:50.111235 | orchestrator | 2025-09-19 16:56:50 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:56:50.111855 | orchestrator | 2025-09-19 16:56:50 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:56:50.112992 | orchestrator | 2025-09-19 16:56:50 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:56:50.113030 | orchestrator | 2025-09-19 16:56:50 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:56:53.155227 | orchestrator | 2025-09-19 16:56:53 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:56:53.155800 | orchestrator | 2025-09-19 16:56:53 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:56:53.157272 | orchestrator | 2025-09-19 16:56:53 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:56:53.158767 | orchestrator | 2025-09-19 16:56:53 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:56:53.159095 | orchestrator | 2025-09-19 16:56:53 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:56:56.205202 | orchestrator | 2025-09-19 16:56:56 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:56:56.206970 | orchestrator | 2025-09-19 16:56:56 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:56:56.212415 | orchestrator | 2025-09-19 16:56:56 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:56:56.213138 | orchestrator | 2025-09-19 16:56:56 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:56:56.213167 | orchestrator | 2025-09-19 16:56:56 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:56:59.250270 | orchestrator | 2025-09-19 16:56:59 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:56:59.251341 | orchestrator | 2025-09-19 16:56:59 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:56:59.252992 | orchestrator | 2025-09-19 16:56:59 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:56:59.256450 | orchestrator | 2025-09-19 16:56:59 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:56:59.256480 | orchestrator | 2025-09-19 16:56:59 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:02.302697 | orchestrator | 2025-09-19 16:57:02 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:02.304242 | orchestrator | 2025-09-19 16:57:02 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:02.306654 | orchestrator | 2025-09-19 16:57:02 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:02.309629 | orchestrator | 2025-09-19 16:57:02 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:02.309653 | orchestrator | 2025-09-19 16:57:02 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:05.355998 | orchestrator | 2025-09-19 16:57:05 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:05.358303 | orchestrator | 2025-09-19 16:57:05 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:05.360657 | orchestrator | 2025-09-19 16:57:05 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:05.362271 | orchestrator | 2025-09-19 16:57:05 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:05.362454 | orchestrator | 2025-09-19 16:57:05 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:08.399933 | orchestrator | 2025-09-19 16:57:08 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:08.400041 | orchestrator | 2025-09-19 16:57:08 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:08.401136 | orchestrator | 2025-09-19 16:57:08 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:08.402672 | orchestrator | 2025-09-19 16:57:08 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:08.402762 | orchestrator | 2025-09-19 16:57:08 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:11.446623 | orchestrator | 2025-09-19 16:57:11 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:11.448572 | orchestrator | 2025-09-19 16:57:11 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:11.450380 | orchestrator | 2025-09-19 16:57:11 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:11.451967 | orchestrator | 2025-09-19 16:57:11 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:11.452001 | orchestrator | 2025-09-19 16:57:11 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:14.498453 | orchestrator | 2025-09-19 16:57:14 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:14.499987 | orchestrator | 2025-09-19 16:57:14 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:14.502528 | orchestrator | 2025-09-19 16:57:14 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:14.505211 | orchestrator | 2025-09-19 16:57:14 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:14.505274 | orchestrator | 2025-09-19 16:57:14 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:17.544008 | orchestrator | 2025-09-19 16:57:17 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:17.547112 | orchestrator | 2025-09-19 16:57:17 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:17.551661 | orchestrator | 2025-09-19 16:57:17 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:17.554089 | orchestrator | 2025-09-19 16:57:17 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:17.554327 | orchestrator | 2025-09-19 16:57:17 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:20.610889 | orchestrator | 2025-09-19 16:57:20 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:20.612649 | orchestrator | 2025-09-19 16:57:20 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:20.614713 | orchestrator | 2025-09-19 16:57:20 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:20.617003 | orchestrator | 2025-09-19 16:57:20 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:20.617032 | orchestrator | 2025-09-19 16:57:20 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:23.646591 | orchestrator | 2025-09-19 16:57:23 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:23.646714 | orchestrator | 2025-09-19 16:57:23 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:23.647479 | orchestrator | 2025-09-19 16:57:23 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:23.648624 | orchestrator | 2025-09-19 16:57:23 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:23.648655 | orchestrator | 2025-09-19 16:57:23 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:26.678407 | orchestrator | 2025-09-19 16:57:26 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:26.679773 | orchestrator | 2025-09-19 16:57:26 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:26.682431 | orchestrator | 2025-09-19 16:57:26 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:26.683619 | orchestrator | 2025-09-19 16:57:26 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:26.684017 | orchestrator | 2025-09-19 16:57:26 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:29.722901 | orchestrator | 2025-09-19 16:57:29 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:29.725506 | orchestrator | 2025-09-19 16:57:29 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:29.727581 | orchestrator | 2025-09-19 16:57:29 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:29.729748 | orchestrator | 2025-09-19 16:57:29 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:29.729774 | orchestrator | 2025-09-19 16:57:29 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:32.773633 | orchestrator | 2025-09-19 16:57:32 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:32.773809 | orchestrator | 2025-09-19 16:57:32 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:32.775055 | orchestrator | 2025-09-19 16:57:32 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:32.776027 | orchestrator | 2025-09-19 16:57:32 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:32.776051 | orchestrator | 2025-09-19 16:57:32 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:35.819288 | orchestrator | 2025-09-19 16:57:35 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:35.822356 | orchestrator | 2025-09-19 16:57:35 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:35.823203 | orchestrator | 2025-09-19 16:57:35 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:35.824113 | orchestrator | 2025-09-19 16:57:35 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:35.824160 | orchestrator | 2025-09-19 16:57:35 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:38.860223 | orchestrator | 2025-09-19 16:57:38 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:38.861793 | orchestrator | 2025-09-19 16:57:38 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:38.864135 | orchestrator | 2025-09-19 16:57:38 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:38.866772 | orchestrator | 2025-09-19 16:57:38 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:38.866811 | orchestrator | 2025-09-19 16:57:38 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:41.903339 | orchestrator | 2025-09-19 16:57:41 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:41.904568 | orchestrator | 2025-09-19 16:57:41 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:41.905027 | orchestrator | 2025-09-19 16:57:41 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:41.907374 | orchestrator | 2025-09-19 16:57:41 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:41.907399 | orchestrator | 2025-09-19 16:57:41 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:44.945358 | orchestrator | 2025-09-19 16:57:44 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:44.948651 | orchestrator | 2025-09-19 16:57:44 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:44.953604 | orchestrator | 2025-09-19 16:57:44 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:44.956251 | orchestrator | 2025-09-19 16:57:44 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:44.956332 | orchestrator | 2025-09-19 16:57:44 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:47.995112 | orchestrator | 2025-09-19 16:57:47 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:47.995932 | orchestrator | 2025-09-19 16:57:47 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:47.997180 | orchestrator | 2025-09-19 16:57:47 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:47.998167 | orchestrator | 2025-09-19 16:57:47 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:47.998198 | orchestrator | 2025-09-19 16:57:47 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:51.045518 | orchestrator | 2025-09-19 16:57:51 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:51.046148 | orchestrator | 2025-09-19 16:57:51 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:51.046721 | orchestrator | 2025-09-19 16:57:51 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state STARTED
2025-09-19 16:57:51.047713 | orchestrator | 2025-09-19 16:57:51 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:51.047742 | orchestrator | 2025-09-19 16:57:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:54.080972 | orchestrator | 2025-09-19 16:57:54 | INFO  | Task 
9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED 2025-09-19 16:57:54.081180 | orchestrator | 2025-09-19 16:57:54 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED 2025-09-19 16:57:54.083325 | orchestrator | 2025-09-19 16:57:54 | INFO  | Task 2aaccbc9-fd11-4963-8cd0-4399810fff54 is in state SUCCESS 2025-09-19 16:57:54.083667 | orchestrator | 2025-09-19 16:57:54.083696 | orchestrator | 2025-09-19 16:57:54.083709 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-19 16:57:54.083721 | orchestrator | 2025-09-19 16:57:54.083732 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-19 16:57:54.083743 | orchestrator | Friday 19 September 2025 16:56:36 +0000 (0:00:00.120) 0:00:00.120 ****** 2025-09-19 16:57:54.083755 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-19 16:57:54.083766 | orchestrator | 2025-09-19 16:57:54.083777 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-19 16:57:54.083788 | orchestrator | Friday 19 September 2025 16:56:37 +0000 (0:00:00.715) 0:00:00.835 ****** 2025-09-19 16:57:54.083799 | orchestrator | changed: [testbed-manager] 2025-09-19 16:57:54.083811 | orchestrator | 2025-09-19 16:57:54.083822 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-19 16:57:54.083857 | orchestrator | Friday 19 September 2025 16:56:38 +0000 (0:00:01.275) 0:00:02.111 ****** 2025-09-19 16:57:54.083869 | orchestrator | changed: [testbed-manager] 2025-09-19 16:57:54.083880 | orchestrator | 2025-09-19 16:57:54.083891 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 16:57:54.083918 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 16:57:54.083931 | orchestrator | 2025-09-19 
16:57:54.083942 | orchestrator | 2025-09-19 16:57:54.083953 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 16:57:54.083964 | orchestrator | Friday 19 September 2025 16:56:39 +0000 (0:00:00.474) 0:00:02.586 ****** 2025-09-19 16:57:54.083975 | orchestrator | =============================================================================== 2025-09-19 16:57:54.083986 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.28s 2025-09-19 16:57:54.083998 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.72s 2025-09-19 16:57:54.084094 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.48s 2025-09-19 16:57:54.084109 | orchestrator | 2025-09-19 16:57:54.084120 | orchestrator | 2025-09-19 16:57:54.084131 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-19 16:57:54.084142 | orchestrator | 2025-09-19 16:57:54.084153 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-19 16:57:54.084220 | orchestrator | Friday 19 September 2025 16:56:37 +0000 (0:00:00.187) 0:00:00.187 ****** 2025-09-19 16:57:54.084232 | orchestrator | ok: [testbed-manager] 2025-09-19 16:57:54.084244 | orchestrator | 2025-09-19 16:57:54.084255 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-19 16:57:54.084266 | orchestrator | Friday 19 September 2025 16:56:37 +0000 (0:00:00.584) 0:00:00.772 ****** 2025-09-19 16:57:54.084279 | orchestrator | ok: [testbed-manager] 2025-09-19 16:57:54.084291 | orchestrator | 2025-09-19 16:57:54.084304 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-19 16:57:54.084316 | orchestrator | Friday 19 September 2025 16:56:38 +0000 (0:00:00.534) 0:00:01.306 ****** 2025-09-19 
16:57:54.084329 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-19 16:57:54.084341 | orchestrator | 2025-09-19 16:57:54.084354 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-19 16:57:54.084367 | orchestrator | Friday 19 September 2025 16:56:39 +0000 (0:00:00.926) 0:00:02.233 ****** 2025-09-19 16:57:54.084379 | orchestrator | changed: [testbed-manager] 2025-09-19 16:57:54.084392 | orchestrator | 2025-09-19 16:57:54.084405 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-19 16:57:54.084417 | orchestrator | Friday 19 September 2025 16:56:40 +0000 (0:00:01.059) 0:00:03.292 ****** 2025-09-19 16:57:54.084430 | orchestrator | changed: [testbed-manager] 2025-09-19 16:57:54.084442 | orchestrator | 2025-09-19 16:57:54.084455 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-19 16:57:54.084467 | orchestrator | Friday 19 September 2025 16:56:41 +0000 (0:00:00.730) 0:00:04.022 ****** 2025-09-19 16:57:54.084480 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-19 16:57:54.084493 | orchestrator | 2025-09-19 16:57:54.084506 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-19 16:57:54.084520 | orchestrator | Friday 19 September 2025 16:56:42 +0000 (0:00:01.242) 0:00:05.265 ****** 2025-09-19 16:57:54.084533 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-19 16:57:54.084544 | orchestrator | 2025-09-19 16:57:54.084555 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-19 16:57:54.084566 | orchestrator | Friday 19 September 2025 16:56:42 +0000 (0:00:00.743) 0:00:06.008 ****** 2025-09-19 16:57:54.084577 | orchestrator | ok: [testbed-manager] 2025-09-19 16:57:54.084588 | orchestrator | 2025-09-19 16:57:54.084599 | orchestrator | TASK [Enable 
kubectl command line completion] ********************************** 2025-09-19 16:57:54.084610 | orchestrator | Friday 19 September 2025 16:56:43 +0000 (0:00:00.381) 0:00:06.390 ****** 2025-09-19 16:57:54.084621 | orchestrator | ok: [testbed-manager] 2025-09-19 16:57:54.084632 | orchestrator | 2025-09-19 16:57:54.084643 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 16:57:54.084654 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 16:57:54.084665 | orchestrator | 2025-09-19 16:57:54.084676 | orchestrator | 2025-09-19 16:57:54.084687 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 16:57:54.084697 | orchestrator | Friday 19 September 2025 16:56:43 +0000 (0:00:00.282) 0:00:06.673 ****** 2025-09-19 16:57:54.084708 | orchestrator | =============================================================================== 2025-09-19 16:57:54.084719 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.24s 2025-09-19 16:57:54.084730 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.06s 2025-09-19 16:57:54.084751 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.93s 2025-09-19 16:57:54.084774 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.74s 2025-09-19 16:57:54.084786 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.73s 2025-09-19 16:57:54.084796 | orchestrator | Get home directory of operator user ------------------------------------- 0.58s 2025-09-19 16:57:54.084807 | orchestrator | Create .kube directory -------------------------------------------------- 0.53s 2025-09-19 16:57:54.084818 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.38s 
2025-09-19 16:57:54.084828 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.28s
2025-09-19 16:57:54.084861 | orchestrator |
2025-09-19 16:57:54.085045 | orchestrator |
2025-09-19 16:57:54.085059 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-09-19 16:57:54.085070 | orchestrator |
2025-09-19 16:57:54.085081 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-19 16:57:54.085091 | orchestrator | Friday 19 September 2025  16:55:33 +0000 (0:00:00.155)       0:00:00.155 ******
2025-09-19 16:57:54.085102 | orchestrator | ok: [localhost] => {
2025-09-19 16:57:54.085121 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-09-19 16:57:54.085133 | orchestrator | }
2025-09-19 16:57:54.085144 | orchestrator |
2025-09-19 16:57:54.085155 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-09-19 16:57:54.085166 | orchestrator | Friday 19 September 2025  16:55:33 +0000 (0:00:00.222)       0:00:00.378 ******
2025-09-19 16:57:54.085177 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-09-19 16:57:54.085190 | orchestrator | ...ignoring
2025-09-19 16:57:54.085201 | orchestrator |
2025-09-19 16:57:54.085211 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-09-19 16:57:54.085222 | orchestrator | Friday 19 September 2025  16:55:36 +0000 (0:00:02.966)       0:00:03.344 ******
2025-09-19 16:57:54.085232 | orchestrator | skipping: [localhost]
2025-09-19 16:57:54.085243 | orchestrator |
2025-09-19 16:57:54.085254 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-09-19 16:57:54.085265 | orchestrator | Friday 19 September 2025  16:55:36 +0000 (0:00:00.116)       0:00:03.460 ******
2025-09-19 16:57:54.085275 | orchestrator | ok: [localhost]
2025-09-19 16:57:54.085286 | orchestrator |
2025-09-19 16:57:54.085297 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 16:57:54.085308 | orchestrator |
2025-09-19 16:57:54.085318 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 16:57:54.085329 | orchestrator | Friday 19 September 2025  16:55:36 +0000 (0:00:00.315)       0:00:03.776 ******
2025-09-19 16:57:54.085340 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:57:54.085351 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:57:54.085362 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:57:54.085372 | orchestrator |
2025-09-19 16:57:54.085383 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 16:57:54.085394 | orchestrator | Friday 19 September 2025  16:55:37 +0000 (0:00:00.273)       0:00:04.050 ******
2025-09-19 16:57:54.085404 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-09-19 16:57:54.085415 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-09-19 16:57:54.085425 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-09-19 16:57:54.085436 | orchestrator |
2025-09-19 16:57:54.085447 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-09-19 16:57:54.085458 | orchestrator |
2025-09-19 16:57:54.085469 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-19 16:57:54.085479 | orchestrator | Friday 19 September 2025  16:55:37 +0000 (0:00:00.548)       0:00:04.598 ******
2025-09-19 16:57:54.085498 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 16:57:54.085509 | orchestrator |
2025-09-19 16:57:54.085519 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-19 16:57:54.085530 | orchestrator | Friday 19 September 2025  16:55:39 +0000 (0:00:02.137)       0:00:06.735 ******
2025-09-19 16:57:54.085541 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:57:54.085551 | orchestrator |
2025-09-19 16:57:54.085562 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-09-19 16:57:54.085572 | orchestrator | Friday 19 September 2025  16:55:41 +0000 (0:00:01.274)       0:00:08.010 ******
2025-09-19 16:57:54.085583 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:57:54.085594 | orchestrator |
2025-09-19 16:57:54.085604 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-09-19 16:57:54.085615 | orchestrator | Friday 19 September 2025  16:55:41 +0000 (0:00:00.600)       0:00:08.610 ******
2025-09-19 16:57:54.085626 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:57:54.085636 | orchestrator |
2025-09-19 16:57:54.085647 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-09-19 16:57:54.085659 | orchestrator | Friday 19 September 2025  16:55:41 +0000 (0:00:00.284)       0:00:08.895 ******
2025-09-19 16:57:54.085671 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:57:54.085684 | orchestrator |
2025-09-19 16:57:54.085697 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-09-19 16:57:54.085710 | orchestrator | Friday 19 September 2025  16:55:42 +0000 (0:00:00.300)       0:00:09.195 ******
2025-09-19 16:57:54.085722 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:57:54.085734 | orchestrator |
2025-09-19 16:57:54.085746 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-09-19 16:57:54.085759 | orchestrator | Friday 19 September 2025  16:55:42 +0000 (0:00:00.793)       0:00:09.989 ******
2025-09-19 16:57:54.085771 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 16:57:54.085783 | orchestrator |
2025-09-19 16:57:54.085796 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-09-19 16:57:54.085808 | orchestrator | Friday 19 September 2025  16:55:43 +0000 (0:00:00.940)       0:00:10.929 ******
2025-09-19 16:57:54.085821 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:57:54.085870 | orchestrator |
2025-09-19 16:57:54.085884 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-09-19 16:57:54.085896 | orchestrator | Friday 19 September 2025  16:55:44 +0000 (0:00:00.989)       0:00:11.919 ******
2025-09-19 16:57:54.085909 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:57:54.085922 | orchestrator |
2025-09-19 16:57:54.085935 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-09-19 16:57:54.085947 | orchestrator | Friday 19 September 2025  16:55:45 +0000 (0:00:00.801)       0:00:12.720 ******
2025-09-19 16:57:54.085960 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:57:54.085972 | orchestrator |
2025-09-19 16:57:54.085998 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-09-19 16:57:54.086011 | orchestrator | Friday 19 September 2025  16:55:46 +0000 (0:00:00.408)       0:00:13.128 ******
2025-09-19 16:57:54.086090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 16:57:54.086119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 16:57:54.086133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 16:57:54.086145 | orchestrator |
2025-09-19 16:57:54.086156 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-09-19 16:57:54.086167 | orchestrator | Friday 19 September 2025  16:55:47 +0000 (0:00:00.926)       0:00:14.055 ******
2025-09-19 16:57:54.086188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 16:57:54.086201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 16:57:54.086337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-19 16:57:54.086365 | orchestrator |
2025-09-19 16:57:54.086376 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-09-19 16:57:54.086388 | orchestrator | Friday 19 September 2025  16:55:50 +0000 (0:00:03.075)       0:00:17.130 ******
2025-09-19 16:57:54.086399 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-09-19 16:57:54.086410 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-09-19 16:57:54.086421 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-09-19 16:57:54.086432 | orchestrator |
2025-09-19 16:57:54.086443 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-09-19 16:57:54.086454 | orchestrator | Friday 19 September 2025 16:55:52 +0000 (0:00:02.351) 0:00:19.482 ****** 2025-09-19 16:57:54.086464 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-19 16:57:54.086475 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-19 16:57:54.086485 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-19 16:57:54.086496 | orchestrator | 2025-09-19 16:57:54.086506 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-19 16:57:54.086517 | orchestrator | Friday 19 September 2025 16:55:54 +0000 (0:00:02.308) 0:00:21.790 ****** 2025-09-19 16:57:54.086528 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-19 16:57:54.086539 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-19 16:57:54.086549 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-19 16:57:54.086560 | orchestrator | 2025-09-19 16:57:54.086571 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-19 16:57:54.086582 | orchestrator | Friday 19 September 2025 16:55:56 +0000 (0:00:01.731) 0:00:23.521 ****** 2025-09-19 16:57:54.086601 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-19 16:57:54.086622 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-19 16:57:54.086633 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-19 16:57:54.086643 | orchestrator | 2025-09-19 16:57:54.086659 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2025-09-19 16:57:54.086670 | orchestrator | Friday 19 September 2025 16:56:00 +0000 (0:00:04.340) 0:00:27.862 ****** 2025-09-19 16:57:54.086681 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-19 16:57:54.086692 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-19 16:57:54.086703 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-19 16:57:54.086713 | orchestrator | 2025-09-19 16:57:54.086724 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-19 16:57:54.086735 | orchestrator | Friday 19 September 2025 16:56:02 +0000 (0:00:01.907) 0:00:29.770 ****** 2025-09-19 16:57:54.086746 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-19 16:57:54.086756 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-19 16:57:54.086767 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-19 16:57:54.086778 | orchestrator | 2025-09-19 16:57:54.086789 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-19 16:57:54.086799 | orchestrator | Friday 19 September 2025 16:56:04 +0000 (0:00:02.087) 0:00:31.858 ****** 2025-09-19 16:57:54.086810 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:57:54.086821 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:57:54.086890 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:57:54.086903 | orchestrator | 2025-09-19 16:57:54.086914 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-19 16:57:54.086925 | orchestrator | Friday 19 September 2025 
16:56:05 +0000 (0:00:00.636) 0:00:32.494 ****** 2025-09-19 16:57:54.086937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 16:57:54.086950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 16:57:54.086984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 16:57:54.086996 | orchestrator | 2025-09-19 16:57:54.087007 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-19 16:57:54.087018 | orchestrator | Friday 19 September 2025 16:56:07 +0000 (0:00:01.918) 0:00:34.413 ****** 2025-09-19 16:57:54.087029 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:57:54.087040 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:57:54.087051 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:57:54.087062 | orchestrator | 2025-09-19 16:57:54.087073 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-19 
16:57:54.087084 | orchestrator | Friday 19 September 2025 16:56:08 +0000 (0:00:01.117) 0:00:35.530 ******
2025-09-19 16:57:54.087095 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:57:54.087105 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:57:54.087116 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:57:54.087127 | orchestrator |
2025-09-19 16:57:54.087138 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-09-19 16:57:54.087149 | orchestrator | Friday 19 September 2025 16:56:15 +0000 (0:00:07.263) 0:00:42.794 ******
2025-09-19 16:57:54.087160 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:57:54.087171 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:57:54.087182 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:57:54.087192 | orchestrator |
2025-09-19 16:57:54.087203 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-19 16:57:54.087214 | orchestrator |
2025-09-19 16:57:54.087225 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-19 16:57:54.087236 | orchestrator | Friday 19 September 2025 16:56:16 +0000 (0:00:00.684) 0:00:43.479 ******
2025-09-19 16:57:54.087247 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:57:54.087258 | orchestrator |
2025-09-19 16:57:54.087269 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-19 16:57:54.087280 | orchestrator | Friday 19 September 2025 16:56:17 +0000 (0:00:00.613) 0:00:44.092 ******
2025-09-19 16:57:54.087290 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:57:54.087301 | orchestrator |
2025-09-19 16:57:54.087312 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-19 16:57:54.087323 | orchestrator | Friday 19 September 2025 16:56:17 +0000 (0:00:00.211) 0:00:44.304 ******
2025-09-19 16:57:54.087334 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:57:54.087345 | orchestrator |
2025-09-19 16:57:54.087356 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-19 16:57:54.087367 | orchestrator | Friday 19 September 2025 16:56:19 +0000 (0:00:01.761) 0:00:46.065 ******
2025-09-19 16:57:54.087385 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:57:54.087396 | orchestrator |
2025-09-19 16:57:54.087407 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-19 16:57:54.087418 | orchestrator |
2025-09-19 16:57:54.087429 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-19 16:57:54.087440 | orchestrator | Friday 19 September 2025 16:57:15 +0000 (0:00:56.228) 0:01:42.293 ******
2025-09-19 16:57:54.087450 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:57:54.087460 | orchestrator |
2025-09-19 16:57:54.087470 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-19 16:57:54.087479 | orchestrator | Friday 19 September 2025 16:57:15 +0000 (0:00:00.635) 0:01:42.928 ******
2025-09-19 16:57:54.087489 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:57:54.087499 | orchestrator |
2025-09-19 16:57:54.087508 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-19 16:57:54.087518 | orchestrator | Friday 19 September 2025 16:57:16 +0000 (0:00:00.216) 0:01:43.145 ******
2025-09-19 16:57:54.087528 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:57:54.087537 | orchestrator |
2025-09-19 16:57:54.087547 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-19 16:57:54.087557 | orchestrator | Friday 19 September 2025 16:57:23 +0000 (0:00:06.941) 0:01:50.087 ******
2025-09-19 16:57:54.087566 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:57:54.087576 | orchestrator |
2025-09-19 16:57:54.087585 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-19 16:57:54.087595 | orchestrator |
2025-09-19 16:57:54.087605 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-19 16:57:54.087614 | orchestrator | Friday 19 September 2025 16:57:34 +0000 (0:00:11.253) 0:02:01.340 ******
2025-09-19 16:57:54.087624 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:57:54.087633 | orchestrator |
2025-09-19 16:57:54.087643 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-19 16:57:54.087653 | orchestrator | Friday 19 September 2025 16:57:34 +0000 (0:00:00.564) 0:02:01.904 ******
2025-09-19 16:57:54.087662 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:57:54.087672 | orchestrator |
2025-09-19 16:57:54.087681 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-19 16:57:54.087691 | orchestrator | Friday 19 September 2025 16:57:35 +0000 (0:00:00.243) 0:02:02.148 ******
2025-09-19 16:57:54.087701 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:57:54.087710 | orchestrator |
2025-09-19 16:57:54.087720 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-19 16:57:54.087735 | orchestrator | Friday 19 September 2025 16:57:36 +0000 (0:00:01.579) 0:02:03.728 ******
2025-09-19 16:57:54.087745 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:57:54.087754 | orchestrator |
2025-09-19 16:57:54.087764 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-09-19 16:57:54.087773 | orchestrator |
2025-09-19 16:57:54.087783 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-09-19 16:57:54.087793 | orchestrator | Friday 19
September 2025 16:57:50 +0000 (0:00:13.755) 0:02:17.484 ******
2025-09-19 16:57:54.087807 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 16:57:54.087817 | orchestrator |
2025-09-19 16:57:54.087826 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-09-19 16:57:54.087851 | orchestrator | Friday 19 September 2025 16:57:50 +0000 (0:00:00.502) 0:02:17.986 ******
2025-09-19 16:57:54.087861 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-09-19 16:57:54.087871 | orchestrator | enable_outward_rabbitmq_True
2025-09-19 16:57:54.087880 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-09-19 16:57:54.087889 | orchestrator | outward_rabbitmq_restart
2025-09-19 16:57:54.087899 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:57:54.087908 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:57:54.087925 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:57:54.087935 | orchestrator |
2025-09-19 16:57:54.087944 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-09-19 16:57:54.087954 | orchestrator | skipping: no hosts matched
2025-09-19 16:57:54.087964 | orchestrator |
2025-09-19 16:57:54.087973 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-09-19 16:57:54.087983 | orchestrator | skipping: no hosts matched
2025-09-19 16:57:54.087992 | orchestrator |
2025-09-19 16:57:54.088002 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-09-19 16:57:54.088012 | orchestrator | skipping: no hosts matched
2025-09-19 16:57:54.088021 | orchestrator |
2025-09-19 16:57:54.088031 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:57:54.088041 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-09-19 16:57:54.088051 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-19 16:57:54.088061 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:57:54.088071 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 16:57:54.088080 | orchestrator |
2025-09-19 16:57:54.088090 | orchestrator |
2025-09-19 16:57:54.088099 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:57:54.088109 | orchestrator | Friday 19 September 2025 16:57:53 +0000 (0:00:02.424) 0:02:20.410 ******
2025-09-19 16:57:54.088119 | orchestrator | ===============================================================================
2025-09-19 16:57:54.088128 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 81.24s
2025-09-19 16:57:54.088138 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.28s
2025-09-19 16:57:54.088147 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.26s
2025-09-19 16:57:54.088157 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 4.34s
2025-09-19 16:57:54.088166 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.08s
2025-09-19 16:57:54.088175 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.97s
2025-09-19 16:57:54.088185 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.42s
2025-09-19 16:57:54.088194 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.35s
2025-09-19 16:57:54.088204 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.31s
2025-09-19 16:57:54.088213 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 2.14s
2025-09-19 16:57:54.088223 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.09s
2025-09-19 16:57:54.088232 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.92s
2025-09-19 16:57:54.088242 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.91s
2025-09-19 16:57:54.088251 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.81s
2025-09-19 16:57:54.088261 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.73s
2025-09-19 16:57:54.088270 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.27s
2025-09-19 16:57:54.088280 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.12s
2025-09-19 16:57:54.088289 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.99s
2025-09-19 16:57:54.088299 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.94s
2025-09-19 16:57:54.088308 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.93s
2025-09-19 16:57:54.088402 | orchestrator | 2025-09-19 16:57:54 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:54.088415 | orchestrator | 2025-09-19 16:57:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:57:57.128611 | orchestrator | 2025-09-19 16:57:57 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:57:57.129904 | orchestrator | 2025-09-19 16:57:57 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:57:57.130576 | orchestrator | 2025-09-19 16:57:57 | INFO  | Task
1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:57:57.130622 | orchestrator | 2025-09-19 16:57:57 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:00.166415 | orchestrator | 2025-09-19 16:58:00 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:00.168290 | orchestrator | 2025-09-19 16:58:00 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:58:00.170443 | orchestrator | 2025-09-19 16:58:00 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:58:00.170597 | orchestrator | 2025-09-19 16:58:00 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:03.211074 | orchestrator | 2025-09-19 16:58:03 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:03.213620 | orchestrator | 2025-09-19 16:58:03 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:58:03.215338 | orchestrator | 2025-09-19 16:58:03 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:58:03.215903 | orchestrator | 2025-09-19 16:58:03 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:06.250286 | orchestrator | 2025-09-19 16:58:06 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:06.250458 | orchestrator | 2025-09-19 16:58:06 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:58:06.251070 | orchestrator | 2025-09-19 16:58:06 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:58:06.251099 | orchestrator | 2025-09-19 16:58:06 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:09.297806 | orchestrator | 2025-09-19 16:58:09 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:09.312578 | orchestrator | 2025-09-19 16:58:09 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:58:09.315197 | orchestrator | 2025-09-19 16:58:09 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:58:09.315277 | orchestrator | 2025-09-19 16:58:09 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:12.347991 | orchestrator | 2025-09-19 16:58:12 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:12.353505 | orchestrator | 2025-09-19 16:58:12 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:58:12.355553 | orchestrator | 2025-09-19 16:58:12 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:58:12.355578 | orchestrator | 2025-09-19 16:58:12 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:15.387959 | orchestrator | 2025-09-19 16:58:15 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:15.388060 | orchestrator | 2025-09-19 16:58:15 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:58:15.391690 | orchestrator | 2025-09-19 16:58:15 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:58:15.391748 | orchestrator | 2025-09-19 16:58:15 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:18.433582 | orchestrator | 2025-09-19 16:58:18 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:18.433927 | orchestrator | 2025-09-19 16:58:18 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:58:18.436463 | orchestrator | 2025-09-19 16:58:18 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:58:18.436521 | orchestrator | 2025-09-19 16:58:18 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:21.479355 | orchestrator | 2025-09-19 16:58:21 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:21.479607 | orchestrator | 2025-09-19 16:58:21 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:58:21.480437 | orchestrator | 2025-09-19 16:58:21 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:58:21.480550 | orchestrator | 2025-09-19 16:58:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:24.536301 | orchestrator | 2025-09-19 16:58:24 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:24.537504 | orchestrator | 2025-09-19 16:58:24 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:58:24.539116 | orchestrator | 2025-09-19 16:58:24 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:58:24.539427 | orchestrator | 2025-09-19 16:58:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:27.589322 | orchestrator | 2025-09-19 16:58:27 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:27.590198 | orchestrator | 2025-09-19 16:58:27 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:58:27.592367 | orchestrator | 2025-09-19 16:58:27 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:58:27.592458 | orchestrator | 2025-09-19 16:58:27 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:30.626962 | orchestrator | 2025-09-19 16:58:30 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:30.627073 | orchestrator | 2025-09-19 16:58:30 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:58:30.627453 | orchestrator | 2025-09-19 16:58:30 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:58:30.628322 | orchestrator | 2025-09-19 16:58:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:33.666544 | orchestrator | 2025-09-19 16:58:33 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:33.667593 | orchestrator | 2025-09-19 16:58:33 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:58:33.668529 | orchestrator | 2025-09-19 16:58:33 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:58:33.668554 | orchestrator | 2025-09-19 16:58:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:36.702610 | orchestrator | 2025-09-19 16:58:36 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:36.703912 | orchestrator | 2025-09-19 16:58:36 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state STARTED
2025-09-19 16:58:36.705287 | orchestrator | 2025-09-19 16:58:36 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 16:58:36.705427 | orchestrator | 2025-09-19 16:58:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 16:58:39.755805 | orchestrator | 2025-09-19 16:58:39 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED
2025-09-19 16:58:39.758672 | orchestrator | 2025-09-19 16:58:39 | INFO  | Task 6fc9111b-1a0c-43c9-85ef-51d4f011d3ac is in state SUCCESS
2025-09-19 16:58:39.760802 | orchestrator |
2025-09-19 16:58:39.760932 | orchestrator |
2025-09-19 16:58:39.761001 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 16:58:39.761018 | orchestrator |
2025-09-19 16:58:39.761034 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 16:58:39.761050 | orchestrator | Friday 19 September 2025 16:56:12 +0000 (0:00:00.314) 0:00:00.314 ******
2025-09-19 16:58:39.761108 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:58:39.761127 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:58:39.761142 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:58:39.761157 | orchestrator | ok: [testbed-node-3]
2025-09-19 16:58:39.761171 | orchestrator | ok: [testbed-node-4]
2025-09-19 16:58:39.761185 | orchestrator | ok: [testbed-node-5]
2025-09-19 16:58:39.761198 | orchestrator |
2025-09-19 16:58:39.761207 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 16:58:39.761216 | orchestrator | Friday 19 September 2025 16:56:13 +0000 (0:00:00.911) 0:00:01.226 ******
2025-09-19 16:58:39.761225 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-09-19 16:58:39.761233 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-09-19 16:58:39.761242 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-09-19 16:58:39.761250 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-09-19 16:58:39.761259 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-09-19 16:58:39.761267 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-09-19 16:58:39.761276 | orchestrator |
2025-09-19 16:58:39.761284 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-09-19 16:58:39.761293 | orchestrator |
2025-09-19 16:58:39.761301 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-09-19 16:58:39.761310 | orchestrator | Friday 19 September 2025 16:56:14 +0000 (0:00:00.900) 0:00:02.126 ******
2025-09-19 16:58:39.761320 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 16:58:39.761330 | orchestrator |
2025-09-19 16:58:39.761339 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-09-19 16:58:39.761349 | orchestrator | Friday 19 September 2025 16:56:16 +0000 (0:00:01.527) 0:00:03.654 ******
2025-09-19 16:58:39.761361 | orchestrator | changed: [testbed-node-2] => (item={'key':
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761401 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761445 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761455 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761466 | orchestrator | 2025-09-19 16:58:39.761487 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-19 16:58:39.761498 | orchestrator | Friday 19 September 2025 16:56:17 +0000 (0:00:01.491) 0:00:05.145 ****** 2025-09-19 16:58:39.761508 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761529 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761539 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761554 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761564 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761580 | orchestrator | 2025-09-19 16:58:39.761590 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-19 16:58:39.761600 | orchestrator | Friday 19 September 
2025 16:56:19 +0000 (0:00:01.741) 0:00:06.887 ****** 2025-09-19 16:58:39.761610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761680 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 
'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761711 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761727 | orchestrator | 2025-09-19 16:58:39.761738 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-09-19 16:58:39.761747 | orchestrator | Friday 19 September 2025 16:56:21 +0000 (0:00:01.651) 0:00:08.539 ****** 2025-09-19 16:58:39.761756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761795 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761804 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761813 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761822 | orchestrator | 2025-09-19 16:58:39.761854 | orchestrator | TASK [ovn-controller 
: Check ovn-controller containers] ************************ 2025-09-19 16:58:39.761888 | orchestrator | Friday 19 September 2025 16:56:23 +0000 (0:00:02.581) 0:00:11.120 ****** 2025-09-19 16:58:39.761899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761928 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-19 16:58:39.761975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761989 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 16:58:39.761998 | orchestrator | 2025-09-19 16:58:39.762007 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-09-19 16:58:39.762066 | orchestrator | Friday 19 September 2025 16:56:25 +0000 (0:00:01.444) 0:00:12.565 ****** 2025-09-19 16:58:39.762083 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:58:39.762098 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:58:39.762113 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:58:39.762128 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:58:39.762143 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:58:39.762158 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:58:39.762200 | orchestrator | 2025-09-19 16:58:39.762214 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-09-19 16:58:39.762228 | orchestrator | Friday 19 September 2025 16:56:29 +0000 (0:00:04.037) 0:00:16.602 ****** 2025-09-19 16:58:39.762243 | orchestrator | changed: 
[testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-09-19 16:58:39.762258 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-09-19 16:58:39.762274 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-09-19 16:58:39.762288 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-09-19 16:58:39.762300 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-09-19 16:58:39.762309 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-09-19 16:58:39.762318 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 16:58:39.762326 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 16:58:39.762341 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 16:58:39.762350 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 16:58:39.762358 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 16:58:39.762367 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-09-19 16:58:39.762375 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 16:58:39.762389 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 16:58:39.762404 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 16:58:39.762418 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 16:58:39.762443 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 16:58:39.762458 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-09-19 16:58:39.762474 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 16:58:39.762490 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 16:58:39.762505 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 16:58:39.762515 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 16:58:39.762523 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 16:58:39.762532 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-09-19 16:58:39.762540 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 16:58:39.762554 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 16:58:39.762563 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 16:58:39.762571 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 
16:58:39.762580 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 16:58:39.762588 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-09-19 16:58:39.762597 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 16:58:39.762605 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 16:58:39.762614 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 16:58:39.762622 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 16:58:39.762631 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 16:58:39.762639 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-09-19 16:58:39.762648 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-19 16:58:39.762656 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-19 16:58:39.762665 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-19 16:58:39.762673 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-19 16:58:39.762682 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-09-19 16:58:39.762690 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-09-19 16:58:39.762699 | orchestrator | ok: 
[testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-09-19 16:58:39.762708 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-09-19 16:58:39.762723 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-09-19 16:58:39.762738 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-09-19 16:58:39.762747 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-09-19 16:58:39.762756 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-09-19 16:58:39.762764 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-19 16:58:39.762773 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-19 16:58:39.762781 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-19 16:58:39.762790 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-19 16:58:39.762799 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-09-19 16:58:39.762807 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-09-19 16:58:39.762816 | orchestrator | 
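The "Configure OVN in OVSDB" task above writes one `external-ids` entry per item into the local Open_vSwitch table, which is how each chassis learns its encap IP, encap type (geneve) and the `ovn-remote` list of SB DB endpoints. A minimal sketch of how such items map onto `ovs-vsctl` invocations, assuming the same keys and values shown in the task output (`build_cmd` is a hypothetical helper, not part of kolla-ansible):

```python
# Sketch: render logged external-ids items as ovs-vsctl command strings.
# Values mirror the testbed-node-0 items in the task output above.

def build_cmd(name, value, state="present"):
    """Return the ovs-vsctl command that applies one external-ids item."""
    if state == "absent":
        # Drop the key entirely when the role marks it absent.
        return f"ovs-vsctl remove Open_vSwitch . external-ids {name}"
    return f"ovs-vsctl set Open_vSwitch . external-ids:{name}={value}"

items = [
    {"name": "ovn-encap-ip", "value": "192.168.16.10"},
    {"name": "ovn-encap-type", "value": "geneve"},
    {"name": "ovn-remote",
     "value": "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"},
    {"name": "ovn-cms-options",
     "value": "enable-chassis-as-gw,availability-zones=nova"},
]

for item in items:
    print(build_cmd(item["name"], item["value"]))
```

This also explains the mixed `changed`/`ok` results in the log: items applied with `state: present` on gateway nodes (0-2) report `changed`, while the same keys removed with `state: absent` on compute nodes (3-5) report `ok` when the key was never set.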
2025-09-19 16:58:39.762824 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 16:58:39.762850 | orchestrator | Friday 19 September 2025 16:56:48 +0000 (0:00:18.794) 0:00:35.397 ****** 2025-09-19 16:58:39.762859 | orchestrator | 2025-09-19 16:58:39.762868 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 16:58:39.762877 | orchestrator | Friday 19 September 2025 16:56:48 +0000 (0:00:00.249) 0:00:35.646 ****** 2025-09-19 16:58:39.762885 | orchestrator | 2025-09-19 16:58:39.762894 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 16:58:39.762903 | orchestrator | Friday 19 September 2025 16:56:48 +0000 (0:00:00.071) 0:00:35.718 ****** 2025-09-19 16:58:39.762911 | orchestrator | 2025-09-19 16:58:39.762920 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 16:58:39.762928 | orchestrator | Friday 19 September 2025 16:56:48 +0000 (0:00:00.065) 0:00:35.783 ****** 2025-09-19 16:58:39.762937 | orchestrator | 2025-09-19 16:58:39.762946 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 16:58:39.762959 | orchestrator | Friday 19 September 2025 16:56:48 +0000 (0:00:00.065) 0:00:35.849 ****** 2025-09-19 16:58:39.762968 | orchestrator | 2025-09-19 16:58:39.762977 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-09-19 16:58:39.762985 | orchestrator | Friday 19 September 2025 16:56:48 +0000 (0:00:00.070) 0:00:35.920 ****** 2025-09-19 16:58:39.762994 | orchestrator | 2025-09-19 16:58:39.763002 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-09-19 16:58:39.763011 | orchestrator | Friday 19 September 2025 16:56:48 +0000 (0:00:00.069) 0:00:35.989 ****** 2025-09-19 16:58:39.763020 | orchestrator 
| ok: [testbed-node-1] 2025-09-19 16:58:39.763028 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:58:39.763037 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:58:39.763045 | orchestrator | ok: [testbed-node-3] 2025-09-19 16:58:39.763054 | orchestrator | ok: [testbed-node-4] 2025-09-19 16:58:39.763062 | orchestrator | ok: [testbed-node-5] 2025-09-19 16:58:39.763071 | orchestrator | 2025-09-19 16:58:39.763080 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-09-19 16:58:39.763088 | orchestrator | Friday 19 September 2025 16:56:50 +0000 (0:00:01.752) 0:00:37.742 ****** 2025-09-19 16:58:39.763097 | orchestrator | changed: [testbed-node-0] 2025-09-19 16:58:39.763106 | orchestrator | changed: [testbed-node-2] 2025-09-19 16:58:39.763114 | orchestrator | changed: [testbed-node-3] 2025-09-19 16:58:39.763129 | orchestrator | changed: [testbed-node-5] 2025-09-19 16:58:39.763138 | orchestrator | changed: [testbed-node-4] 2025-09-19 16:58:39.763146 | orchestrator | changed: [testbed-node-1] 2025-09-19 16:58:39.763155 | orchestrator | 2025-09-19 16:58:39.763164 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-09-19 16:58:39.763172 | orchestrator | 2025-09-19 16:58:39.763181 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-19 16:58:39.763190 | orchestrator | Friday 19 September 2025 16:57:21 +0000 (0:00:30.867) 0:01:08.609 ****** 2025-09-19 16:58:39.763198 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 16:58:39.763207 | orchestrator | 2025-09-19 16:58:39.763216 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-19 16:58:39.763224 | orchestrator | Friday 19 September 2025 16:57:21 +0000 (0:00:00.756) 0:01:09.366 ****** 2025-09-19 16:58:39.763233 | orchestrator | included: 
/ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 16:58:39.763242 | orchestrator | 2025-09-19 16:58:39.763250 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-09-19 16:58:39.763259 | orchestrator | Friday 19 September 2025 16:57:22 +0000 (0:00:00.675) 0:01:10.041 ****** 2025-09-19 16:58:39.763267 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:58:39.763276 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:58:39.763285 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:58:39.763293 | orchestrator | 2025-09-19 16:58:39.763302 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-09-19 16:58:39.763311 | orchestrator | Friday 19 September 2025 16:57:23 +0000 (0:00:01.234) 0:01:11.275 ****** 2025-09-19 16:58:39.763319 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:58:39.763328 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:58:39.763336 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:58:39.763350 | orchestrator | 2025-09-19 16:58:39.763358 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-09-19 16:58:39.763367 | orchestrator | Friday 19 September 2025 16:57:24 +0000 (0:00:00.364) 0:01:11.640 ****** 2025-09-19 16:58:39.763376 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:58:39.763385 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:58:39.763393 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:58:39.763402 | orchestrator | 2025-09-19 16:58:39.763410 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-09-19 16:58:39.763419 | orchestrator | Friday 19 September 2025 16:57:24 +0000 (0:00:00.393) 0:01:12.033 ****** 2025-09-19 16:58:39.763428 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:58:39.763437 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:58:39.763445 | orchestrator 
| ok: [testbed-node-2] 2025-09-19 16:58:39.763454 | orchestrator | 2025-09-19 16:58:39.763462 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-09-19 16:58:39.763471 | orchestrator | Friday 19 September 2025 16:57:25 +0000 (0:00:00.356) 0:01:12.390 ****** 2025-09-19 16:58:39.763480 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:58:39.763488 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:58:39.763497 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:58:39.763505 | orchestrator | 2025-09-19 16:58:39.763514 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-09-19 16:58:39.763523 | orchestrator | Friday 19 September 2025 16:57:25 +0000 (0:00:00.595) 0:01:12.985 ****** 2025-09-19 16:58:39.763531 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.763540 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.763549 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:58:39.763557 | orchestrator | 2025-09-19 16:58:39.763566 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-09-19 16:58:39.763574 | orchestrator | Friday 19 September 2025 16:57:25 +0000 (0:00:00.336) 0:01:13.322 ****** 2025-09-19 16:58:39.763583 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.763605 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.763613 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:58:39.763622 | orchestrator | 2025-09-19 16:58:39.763631 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-09-19 16:58:39.763640 | orchestrator | Friday 19 September 2025 16:57:26 +0000 (0:00:00.352) 0:01:13.675 ****** 2025-09-19 16:58:39.763649 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.763657 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.763666 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 16:58:39.763674 | orchestrator | 2025-09-19 16:58:39.763683 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-09-19 16:58:39.763692 | orchestrator | Friday 19 September 2025 16:57:26 +0000 (0:00:00.350) 0:01:14.025 ****** 2025-09-19 16:58:39.763700 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.763709 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.763718 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:58:39.763726 | orchestrator | 2025-09-19 16:58:39.763735 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-09-19 16:58:39.763748 | orchestrator | Friday 19 September 2025 16:57:27 +0000 (0:00:00.455) 0:01:14.481 ****** 2025-09-19 16:58:39.763757 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.763765 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.763774 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:58:39.763783 | orchestrator | 2025-09-19 16:58:39.763791 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-09-19 16:58:39.763800 | orchestrator | Friday 19 September 2025 16:57:27 +0000 (0:00:00.343) 0:01:14.825 ****** 2025-09-19 16:58:39.763809 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.763818 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.763827 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:58:39.763855 | orchestrator | 2025-09-19 16:58:39.763864 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-09-19 16:58:39.763873 | orchestrator | Friday 19 September 2025 16:57:27 +0000 (0:00:00.315) 0:01:15.140 ****** 2025-09-19 16:58:39.763881 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.763890 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.763899 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 16:58:39.763907 | orchestrator | 2025-09-19 16:58:39.763916 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-09-19 16:58:39.763925 | orchestrator | Friday 19 September 2025 16:57:28 +0000 (0:00:00.301) 0:01:15.442 ****** 2025-09-19 16:58:39.763934 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.763942 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.763951 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:58:39.763959 | orchestrator | 2025-09-19 16:58:39.763968 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-09-19 16:58:39.763977 | orchestrator | Friday 19 September 2025 16:57:28 +0000 (0:00:00.293) 0:01:15.735 ****** 2025-09-19 16:58:39.763986 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.763994 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.764003 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:58:39.764011 | orchestrator | 2025-09-19 16:58:39.764020 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-09-19 16:58:39.764028 | orchestrator | Friday 19 September 2025 16:57:28 +0000 (0:00:00.510) 0:01:16.246 ****** 2025-09-19 16:58:39.764037 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.764046 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.764054 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:58:39.764063 | orchestrator | 2025-09-19 16:58:39.764072 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-09-19 16:58:39.764080 | orchestrator | Friday 19 September 2025 16:57:29 +0000 (0:00:00.319) 0:01:16.566 ****** 2025-09-19 16:58:39.764089 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.764097 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.764112 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 16:58:39.764121 | orchestrator | 2025-09-19 16:58:39.764129 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-09-19 16:58:39.764138 | orchestrator | Friday 19 September 2025 16:57:29 +0000 (0:00:00.281) 0:01:16.847 ****** 2025-09-19 16:58:39.764147 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.764155 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.764169 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:58:39.764178 | orchestrator | 2025-09-19 16:58:39.764187 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-09-19 16:58:39.764195 | orchestrator | Friday 19 September 2025 16:57:29 +0000 (0:00:00.295) 0:01:17.143 ****** 2025-09-19 16:58:39.764204 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 16:58:39.764213 | orchestrator | 2025-09-19 16:58:39.764222 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-09-19 16:58:39.764230 | orchestrator | Friday 19 September 2025 16:57:30 +0000 (0:00:00.736) 0:01:17.879 ****** 2025-09-19 16:58:39.764239 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:58:39.764248 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:58:39.764256 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:58:39.764265 | orchestrator | 2025-09-19 16:58:39.764273 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-09-19 16:58:39.764282 | orchestrator | Friday 19 September 2025 16:57:30 +0000 (0:00:00.466) 0:01:18.345 ****** 2025-09-19 16:58:39.764290 | orchestrator | ok: [testbed-node-0] 2025-09-19 16:58:39.764299 | orchestrator | ok: [testbed-node-1] 2025-09-19 16:58:39.764308 | orchestrator | ok: [testbed-node-2] 2025-09-19 16:58:39.764317 | orchestrator | 2025-09-19 16:58:39.764325 | 
orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-09-19 16:58:39.764334 | orchestrator | Friday 19 September 2025 16:57:31 +0000 (0:00:00.445) 0:01:18.791 ****** 2025-09-19 16:58:39.764342 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.764351 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.764359 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:58:39.764368 | orchestrator | 2025-09-19 16:58:39.764377 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-09-19 16:58:39.764385 | orchestrator | Friday 19 September 2025 16:57:31 +0000 (0:00:00.515) 0:01:19.306 ****** 2025-09-19 16:58:39.764394 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.764402 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.764411 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:58:39.764419 | orchestrator | 2025-09-19 16:58:39.764428 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-09-19 16:58:39.764437 | orchestrator | Friday 19 September 2025 16:57:32 +0000 (0:00:00.359) 0:01:19.666 ****** 2025-09-19 16:58:39.764445 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.764454 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.764463 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:58:39.764471 | orchestrator | 2025-09-19 16:58:39.764480 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-09-19 16:58:39.764489 | orchestrator | Friday 19 September 2025 16:57:32 +0000 (0:00:00.345) 0:01:20.011 ****** 2025-09-19 16:58:39.764497 | orchestrator | skipping: [testbed-node-0] 2025-09-19 16:58:39.764506 | orchestrator | skipping: [testbed-node-1] 2025-09-19 16:58:39.764514 | orchestrator | skipping: [testbed-node-2] 2025-09-19 16:58:39.764523 | orchestrator | 2025-09-19 
16:58:39.764537 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-09-19 16:58:39.764546 | orchestrator | Friday 19 September 2025 16:57:33 +0000 (0:00:00.384) 0:01:20.396 ******
2025-09-19 16:58:39.764554 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:58:39.764563 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:58:39.764572 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:58:39.764586 | orchestrator |
2025-09-19 16:58:39.764594 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-09-19 16:58:39.764603 | orchestrator | Friday 19 September 2025 16:57:33 +0000 (0:00:00.602) 0:01:20.999 ******
2025-09-19 16:58:39.764612 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:58:39.764620 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:58:39.764629 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:58:39.764637 | orchestrator |
2025-09-19 16:58:39.764646 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-19 16:58:39.764655 | orchestrator | Friday 19 September 2025 16:57:34 +0000 (0:00:00.520) 0:01:21.519 ******
2025-09-19 16:58:39.764664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764770 | orchestrator |
2025-09-19 16:58:39.764779 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-19 16:58:39.764787 | orchestrator | Friday 19 September 2025 16:57:35 +0000 (0:00:01.437) 0:01:22.956 ******
2025-09-19 16:58:39.764797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764875 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764941 | orchestrator |
2025-09-19 16:58:39.764950 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-19 16:58:39.764959 | orchestrator | Friday 19 September 2025 16:57:39 +0000 (0:00:03.542) 0:01:26.498 ******
2025-09-19 16:58:39.764972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.764990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765065 | orchestrator |
2025-09-19 16:58:39.765075 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 16:58:39.765083 | orchestrator | Friday 19 September 2025 16:57:41 +0000 (0:00:02.171) 0:01:28.670 ******
2025-09-19 16:58:39.765092 | orchestrator |
2025-09-19 16:58:39.765101 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 16:58:39.765110 | orchestrator | Friday 19 September 2025 16:57:41 +0000 (0:00:00.066) 0:01:28.736 ******
2025-09-19 16:58:39.765119 | orchestrator |
2025-09-19 16:58:39.765128 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 16:58:39.765137 | orchestrator | Friday 19 September 2025 16:57:41 +0000 (0:00:00.062) 0:01:28.799 ******
2025-09-19 16:58:39.765145 | orchestrator |
2025-09-19 16:58:39.765154 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-19 16:58:39.765163 | orchestrator | Friday 19 September 2025 16:57:41 +0000 (0:00:00.068) 0:01:28.868 ******
2025-09-19 16:58:39.765171 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:58:39.765185 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:58:39.765194 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:58:39.765203 | orchestrator |
2025-09-19 16:58:39.765212 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-19 16:58:39.765221 | orchestrator | Friday 19 September 2025 16:57:48 +0000 (0:00:07.315) 0:01:36.183 ******
2025-09-19 16:58:39.765230 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:58:39.765238 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:58:39.765247 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:58:39.765255 | orchestrator |
2025-09-19 16:58:39.765264 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-19 16:58:39.765273 | orchestrator | Friday 19 September 2025 16:57:55 +0000 (0:00:06.544) 0:01:42.728 ******
2025-09-19 16:58:39.765281 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:58:39.765290 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:58:39.765299 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:58:39.765307 | orchestrator |
2025-09-19 16:58:39.765316 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-19 16:58:39.765324 | orchestrator | Friday 19 September 2025 16:57:57 +0000 (0:00:02.416) 0:01:45.144 ******
2025-09-19 16:58:39.765333 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:58:39.765342 | orchestrator |
2025-09-19 16:58:39.765351 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-19 16:58:39.765359 | orchestrator | Friday 19 September 2025 16:57:58 +0000 (0:00:00.326) 0:01:45.470 ******
2025-09-19 16:58:39.765368 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:58:39.765376 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:58:39.765385 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:58:39.765393 | orchestrator |
2025-09-19 16:58:39.765402 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-19 16:58:39.765411 | orchestrator | Friday 19 September 2025 16:57:58 +0000 (0:00:00.770) 0:01:46.240 ******
2025-09-19 16:58:39.765419 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:58:39.765428 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:58:39.765436 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:58:39.765445 | orchestrator |
2025-09-19 16:58:39.765453 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-19 16:58:39.765462 | orchestrator | Friday 19 September 2025 16:57:59 +0000 (0:00:00.599) 0:01:46.840 ******
2025-09-19 16:58:39.765471 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:58:39.765480 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:58:39.765488 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:58:39.765497 | orchestrator |
2025-09-19 16:58:39.765505 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-19 16:58:39.765520 | orchestrator | Friday 19 September 2025 16:58:00 +0000 (0:00:00.734) 0:01:47.574 ******
2025-09-19 16:58:39.765529 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:58:39.765538 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:58:39.765547 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:58:39.765555 | orchestrator |
2025-09-19 16:58:39.765564 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-19 16:58:39.765573 | orchestrator | Friday 19 September 2025 16:58:00 +0000 (0:00:00.610) 0:01:48.185 ******
2025-09-19 16:58:39.765582 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:58:39.765590 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:58:39.765604 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:58:39.765613 | orchestrator |
2025-09-19 16:58:39.765622 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-19 16:58:39.765631 | orchestrator | Friday 19 September 2025 16:58:01 +0000 (0:00:00.952) 0:01:49.137 ******
2025-09-19 16:58:39.765640 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:58:39.765649 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:58:39.765657 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:58:39.765666 | orchestrator |
2025-09-19 16:58:39.765674 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-09-19 16:58:39.765683 | orchestrator | Friday 19 September 2025 16:58:02 +0000 (0:00:00.783) 0:01:49.921 ******
2025-09-19 16:58:39.765691 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:58:39.765700 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:58:39.765708 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:58:39.765717 | orchestrator |
2025-09-19 16:58:39.765725 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-19 16:58:39.765734 | orchestrator | Friday 19 September 2025 16:58:02 +0000 (0:00:00.282) 0:01:50.203 ******
2025-09-19 16:58:39.765743 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765752 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765761 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765776 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765785 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765795 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765809 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765818 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765845 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765855 | orchestrator |
2025-09-19 16:58:39.765864 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-19 16:58:39.765873 | orchestrator | Friday 19 September 2025 16:58:04 +0000 (0:00:01.398) 0:01:51.601 ******
2025-09-19 16:58:39.765881 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765890 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765899 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765908 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765958 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765967 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.765976 | orchestrator |
2025-09-19 16:58:39.765985 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-09-19 16:58:39.765994 | orchestrator | Friday 19 September 2025 16:58:09 +0000 (0:00:05.187) 0:01:56.788 ******
2025-09-19 16:58:39.766007 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.766061 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.766073 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.766082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.766091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.766106 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.766121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.766131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.766140 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 16:58:39.766149 | orchestrator |
2025-09-19 16:58:39.766158 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 16:58:39.766167 | orchestrator | Friday 19 September 2025 16:58:12 +0000 (0:00:02.923) 0:01:59.712 ******
2025-09-19 16:58:39.766175 | orchestrator |
2025-09-19 16:58:39.766184 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 16:58:39.766193 | orchestrator | Friday 19 September 2025 16:58:12 +0000 (0:00:00.067) 0:01:59.779 ******
2025-09-19 16:58:39.766201 | orchestrator |
2025-09-19 16:58:39.766210 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-09-19 16:58:39.766219 | orchestrator | Friday 19 September 2025 16:58:12 +0000 (0:00:00.064) 0:01:59.844 ******
2025-09-19 16:58:39.766227 | orchestrator |
2025-09-19 16:58:39.766236 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-09-19 16:58:39.766245 | orchestrator | Friday 19 September 2025 16:58:12 +0000 (0:00:00.060) 0:01:59.904 ******
2025-09-19 16:58:39.766253 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:58:39.766262 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:58:39.766271 | orchestrator |
2025-09-19 16:58:39.766285 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-09-19 16:58:39.766294 | orchestrator | Friday 19 September 2025 16:58:18 +0000 (0:00:06.270) 0:02:06.175 ******
2025-09-19 16:58:39.766303 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:58:39.766311 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:58:39.766320 | orchestrator |
2025-09-19 16:58:39.766329 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-09-19 16:58:39.766337 | orchestrator | Friday 19 September 2025 16:58:25 +0000 (0:00:06.215) 0:02:12.391 ******
2025-09-19 16:58:39.766346 | orchestrator | changed: [testbed-node-1]
2025-09-19 16:58:39.766355 | orchestrator | changed: [testbed-node-2]
2025-09-19 16:58:39.766363 | orchestrator |
2025-09-19 16:58:39.766372 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-09-19 16:58:39.766381 | orchestrator | Friday 19 September 2025 16:58:31 +0000 (0:00:06.447) 0:02:18.838 ******
2025-09-19 16:58:39.766390 | orchestrator | skipping: [testbed-node-0]
2025-09-19 16:58:39.766398 | orchestrator |
2025-09-19 16:58:39.766407 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-09-19 16:58:39.766416 | orchestrator | Friday 19 September 2025 16:58:31 +0000 (0:00:00.126) 0:02:18.965 ******
2025-09-19 16:58:39.766424 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:58:39.766433 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:58:39.766442 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:58:39.766450 | orchestrator |
2025-09-19 16:58:39.766477 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-09-19 16:58:39.766486 | orchestrator | Friday 19 September 2025 16:58:32 +0000 (0:00:00.837) 0:02:19.802 ******
2025-09-19 16:58:39.766494 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:58:39.766503 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:58:39.766512 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:58:39.766520 | orchestrator |
2025-09-19 16:58:39.766529 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-09-19 16:58:39.766538 | orchestrator | Friday 19 September 2025 16:58:33 +0000 (0:00:00.698) 0:02:20.500 ******
2025-09-19 16:58:39.766547 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:58:39.766555 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:58:39.766564 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:58:39.766572 | orchestrator |
2025-09-19 16:58:39.766581 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-09-19 16:58:39.766590 | orchestrator | Friday 19 September 2025 16:58:33 +0000 (0:00:00.778) 0:02:21.279 ******
2025-09-19 16:58:39.766598 | orchestrator | skipping: [testbed-node-1]
2025-09-19 16:58:39.766607 | orchestrator | skipping: [testbed-node-2]
2025-09-19 16:58:39.766616 | orchestrator | changed: [testbed-node-0]
2025-09-19 16:58:39.766624 | orchestrator |
2025-09-19 16:58:39.766633 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-09-19 16:58:39.766642 | orchestrator | Friday 19 September 2025 16:58:34 +0000 (0:00:00.833) 0:02:22.112 ******
2025-09-19 16:58:39.766650 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:58:39.766659 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:58:39.766668 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:58:39.766676 | orchestrator |
2025-09-19 16:58:39.766689 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-09-19 16:58:39.766699 | orchestrator | Friday 19 September 2025 16:58:35 +0000 (0:00:00.770) 0:02:22.883 ******
2025-09-19 16:58:39.766707 | orchestrator | ok: [testbed-node-0]
2025-09-19 16:58:39.766716 | orchestrator | ok: [testbed-node-1]
2025-09-19 16:58:39.766725 | orchestrator | ok: [testbed-node-2]
2025-09-19 16:58:39.766734 | orchestrator |
2025-09-19 16:58:39.766742 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 16:58:39.766752 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-19 16:58:39.766761 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-19 16:58:39.766770 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2025-09-19 16:58:39.766779 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:58:39.766788 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:58:39.766800 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 16:58:39.766815 | orchestrator |
2025-09-19 16:58:39.766831 | orchestrator |
2025-09-19 16:58:39.766868 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 16:58:39.766883 | orchestrator | Friday 19 September 2025 16:58:36 +0000 (0:00:01.137) 0:02:24.020 ******
2025-09-19 16:58:39.766892 | orchestrator | ===============================================================================
2025-09-19 16:58:39.766901 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 30.87s
2025-09-19 16:58:39.766910 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.79s
2025-09-19 16:58:39.766918 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.59s
2025-09-19 16:58:39.766935 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 12.76s
2025-09-19 16:58:39.766943 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.86s
2025-09-19 16:58:39.766952 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.19s
2025-09-19 16:58:39.766960 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 4.04s
2025-09-19 16:58:39.766975 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.54s
2025-09-19 16:58:39.766984 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.92s
2025-09-19 16:58:39.766992 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.58s
2025-09-19
16:58:39.767001 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.17s 2025-09-19 16:58:39.767010 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.75s 2025-09-19 16:58:39.767018 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.74s 2025-09-19 16:58:39.767027 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.65s 2025-09-19 16:58:39.767035 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.53s 2025-09-19 16:58:39.767044 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.49s 2025-09-19 16:58:39.767052 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.44s 2025-09-19 16:58:39.767061 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.44s 2025-09-19 16:58:39.767069 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.40s 2025-09-19 16:58:39.767078 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.23s 2025-09-19 16:58:39.767087 | orchestrator | 2025-09-19 16:58:39 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:58:39.767095 | orchestrator | 2025-09-19 16:58:39 | INFO  | Wait 1 second(s) until the next check 2025-09-19 16:58:42.803356 | orchestrator | 2025-09-19 16:58:42 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED 2025-09-19 16:58:42.804304 | orchestrator | 2025-09-19 16:58:42 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:58:42.804319 | orchestrator | 2025-09-19 16:58:42 | INFO  | Wait 1 second(s) until the next check 2025-09-19 16:58:45.845153 | orchestrator | 2025-09-19 16:58:45 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED 
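The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" entries above come from the deployment CLI polling its background tasks until they reach a terminal state. A minimal sketch of that pattern follows; the function name `wait_for_tasks`, the injected `get_state` callable, and the state names are assumptions for illustration, not the actual OSISM implementation.

```python
import time

# Assumed terminal states, mirroring common Celery-style task states.
TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll each task's state until all reach a terminal state.

    get_state: callable mapping a task id to its current state string;
    it stands in for whatever API the real CLI queries (hypothetical).
    Returns a dict of final states, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        # sorted() snapshots the set so we can discard while iterating
        for task_id in sorted(pending):
            state = get_state(task_id)
            states[task_id] = state
            print(f"INFO | Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"INFO | Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

Under this sketch, two tasks that stay STARTED for a few minutes produce exactly the kind of interleaved log seen here, ending once the last task reports SUCCESS.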
2025-09-19 16:58:45.846294 | orchestrator | 2025-09-19 16:58:45 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 16:58:45.846364 | orchestrator | 2025-09-19 16:58:45 | INFO  | Wait 1 second(s) until the next check [... identical polling entries elided: tasks 9d33c48d-ceab-4170-8306-748a0dda328c and 1de75f07-d03f-4a3f-b305-2182a19c95b5 remained in state STARTED, re-checked every ~3 seconds from 16:58:48 to 17:01:21 ...] 2025-09-19 17:01:24.183088 | orchestrator | 2025-09-19 17:01:24 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state
STARTED 2025-09-19 17:01:24.183184 | orchestrator | 2025-09-19 17:01:24 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:01:24.183198 | orchestrator | 2025-09-19 17:01:24 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:01:27.222087 | orchestrator | 2025-09-19 17:01:27 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state STARTED 2025-09-19 17:01:27.223070 | orchestrator | 2025-09-19 17:01:27 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:01:27.223148 | orchestrator | 2025-09-19 17:01:27 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:01:30.276089 | orchestrator | 2025-09-19 17:01:30 | INFO  | Task 9d33c48d-ceab-4170-8306-748a0dda328c is in state SUCCESS 2025-09-19 17:01:30.276960 | orchestrator | 2025-09-19 17:01:30.277055 | orchestrator | 2025-09-19 17:01:30.277070 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:01:30.277081 | orchestrator | 2025-09-19 17:01:30.277093 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:01:30.277104 | orchestrator | Friday 19 September 2025 16:55:12 +0000 (0:00:00.565) 0:00:00.565 ****** 2025-09-19 17:01:30.277115 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:01:30.277126 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:01:30.277137 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:01:30.277148 | orchestrator | 2025-09-19 17:01:30.277159 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:01:30.277170 | orchestrator | Friday 19 September 2025 16:55:13 +0000 (0:00:00.747) 0:00:01.313 ****** 2025-09-19 17:01:30.277181 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-19 17:01:30.277247 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-19 17:01:30.277260 | orchestrator | 
ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-19 17:01:30.277271 | orchestrator | 2025-09-19 17:01:30.277282 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-19 17:01:30.277293 | orchestrator | 2025-09-19 17:01:30.277304 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-19 17:01:30.277315 | orchestrator | Friday 19 September 2025 16:55:14 +0000 (0:00:00.840) 0:00:02.153 ****** 2025-09-19 17:01:30.277326 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.277337 | orchestrator | 2025-09-19 17:01:30.277347 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-19 17:01:30.277358 | orchestrator | Friday 19 September 2025 16:55:15 +0000 (0:00:01.223) 0:00:03.377 ****** 2025-09-19 17:01:30.277369 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:01:30.277380 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:01:30.277390 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:01:30.277401 | orchestrator | 2025-09-19 17:01:30.277412 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-19 17:01:30.277422 | orchestrator | Friday 19 September 2025 16:55:16 +0000 (0:00:00.988) 0:00:04.366 ****** 2025-09-19 17:01:30.277433 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.277443 | orchestrator | 2025-09-19 17:01:30.277546 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-19 17:01:30.277558 | orchestrator | Friday 19 September 2025 16:55:17 +0000 (0:00:00.726) 0:00:05.093 ****** 2025-09-19 17:01:30.277571 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:01:30.277584 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:01:30.277596 | orchestrator | 
ok: [testbed-node-2] 2025-09-19 17:01:30.277607 | orchestrator | 2025-09-19 17:01:30.277620 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-19 17:01:30.277677 | orchestrator | Friday 19 September 2025 16:55:17 +0000 (0:00:00.796) 0:00:05.890 ****** 2025-09-19 17:01:30.277784 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-19 17:01:30.277799 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-19 17:01:30.277812 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-19 17:01:30.277824 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-19 17:01:30.277837 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-19 17:01:30.277870 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-19 17:01:30.277898 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-19 17:01:30.277911 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-19 17:01:30.277924 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-19 17:01:30.277937 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-19 17:01:30.277949 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-19 17:01:30.277960 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-19 17:01:30.277970 | orchestrator | 2025-09-19 17:01:30.277981 | orchestrator | TASK [module-load : Load modules] 
**********************************************
2025-09-19 17:01:30.277992 | orchestrator | Friday 19 September 2025 16:55:23 +0000 (0:00:05.694) 0:00:11.585 ******
2025-09-19 17:01:30.278003 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-19 17:01:30.278144 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-19 17:01:30.278163 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-19 17:01:30.278174 | orchestrator |
2025-09-19 17:01:30.278186 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-19 17:01:30.278197 | orchestrator | Friday 19 September 2025 16:55:24 +0000 (0:00:00.711) 0:00:12.296 ******
2025-09-19 17:01:30.278208 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-09-19 17:01:30.278219 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-09-19 17:01:30.278230 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-09-19 17:01:30.278240 | orchestrator |
2025-09-19 17:01:30.278251 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-19 17:01:30.278263 | orchestrator | Friday 19 September 2025 16:55:25 +0000 (0:00:01.472) 0:00:13.768 ******
2025-09-19 17:01:30.278274 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-09-19 17:01:30.278285 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.278312 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-09-19 17:01:30.278324 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.278335 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-09-19 17:01:30.278345 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.278356 | orchestrator |
2025-09-19 17:01:30.278367 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-09-19 17:01:30.278379 | orchestrator | Friday 19 September 2025 16:55:26 +0000 (0:00:00.740) 0:00:14.508 ******
2025-09-19 17:01:30.278426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.278594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.278609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.278628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 17:01:30.278640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 17:01:30.278661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 17:01:30.278700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 17:01:30.278715 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 17:01:30.278734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 17:01:30.278768 | orchestrator |
2025-09-19 17:01:30.278782 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-09-19 17:01:30.278793 | orchestrator | Friday 19 September 2025 16:55:28 +0000 (0:00:02.348) 0:00:16.857 ******
2025-09-19 17:01:30.278804 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.278815 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.278826 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.278836 | orchestrator |
2025-09-19 17:01:30.278865 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-09-19 17:01:30.278876 | orchestrator | Friday 19 September 2025 16:55:30 +0000 (0:00:01.142) 0:00:18.000 ******
2025-09-19 17:01:30.278887 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-09-19 17:01:30.278898 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-09-19 17:01:30.278909 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-09-19 17:01:30.278920 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-09-19 17:01:30.278953 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-09-19 17:01:30.278965 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-09-19 17:01:30.278976 | orchestrator |
2025-09-19 17:01:30.278988 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-09-19 17:01:30.278999 | orchestrator | Friday 19 September 2025 16:55:32 +0000 (0:00:02.644) 0:00:20.644 ******
2025-09-19 17:01:30.279009 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.279020 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.279031 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.279148 | orchestrator |
2025-09-19 17:01:30.279167 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-09-19 17:01:30.279178 | orchestrator | Friday 19 September 2025 16:55:34 +0000 (0:00:01.488) 0:00:22.133 ******
2025-09-19 17:01:30.279234 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:01:30.279245 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:01:30.279280 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:01:30.279292 | orchestrator |
2025-09-19 17:01:30.279354 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-09-19 17:01:30.279366 | orchestrator | Friday 19 September 2025 16:55:35 +0000 (0:00:01.355) 0:00:23.488 ******
2025-09-19 17:01:30.279378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.279434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 17:01:30.279454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 17:01:30.279466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a5d06e2251305ebb734ef9d177583105eea74ce2', '__omit_place_holder__a5d06e2251305ebb734ef9d177583105eea74ce2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-19 17:01:30.279478 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.279489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.279555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 17:01:30.279568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 17:01:30.279580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a5d06e2251305ebb734ef9d177583105eea74ce2', '__omit_place_holder__a5d06e2251305ebb734ef9d177583105eea74ce2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-19 17:01:30.279599 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.279653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.279667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 17:01:30.279678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 17:01:30.279690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a5d06e2251305ebb734ef9d177583105eea74ce2', '__omit_place_holder__a5d06e2251305ebb734ef9d177583105eea74ce2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-19 17:01:30.279701 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.279712 | orchestrator |
2025-09-19 17:01:30.279723 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2025-09-19 17:01:30.279734 | orchestrator | Friday 19 September 2025 16:55:36 +0000 (0:00:00.490) 0:00:23.979 ******
2025-09-19 17:01:30.279751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.279764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.279791 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.279803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 17:01:30.279814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 17:01:30.279825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a5d06e2251305ebb734ef9d177583105eea74ce2', '__omit_place_holder__a5d06e2251305ebb734ef9d177583105eea74ce2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-19 17:01:30.279837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 17:01:30.280046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 17:01:30.280107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a5d06e2251305ebb734ef9d177583105eea74ce2', '__omit_place_holder__a5d06e2251305ebb734ef9d177583105eea74ce2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-19 17:01:30.280130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 17:01:30.280142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 17:01:30.280153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__a5d06e2251305ebb734ef9d177583105eea74ce2', '__omit_place_holder__a5d06e2251305ebb734ef9d177583105eea74ce2'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-09-19 17:01:30.280164 | orchestrator |
2025-09-19 17:01:30.280176 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2025-09-19 17:01:30.280187 | orchestrator | Friday 19 September 2025 16:55:40 +0000 (0:00:04.699) 0:00:28.679 ******
2025-09-19 17:01:30.280198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.280312 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.280332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.280351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 17:01:30.280361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 17:01:30.280371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-19 17:01:30.280381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 17:01:30.280391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 17:01:30.280406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-19 17:01:30.280422 | orchestrator |
2025-09-19 17:01:30.280433 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-09-19 17:01:30.280443 | orchestrator | Friday 19 September 2025 16:55:44 +0000 (0:00:03.621) 0:00:32.300 ******
2025-09-19 17:01:30.280453 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-19 17:01:30.280463 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-19 17:01:30.280473 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-09-19 17:01:30.280483 | orchestrator |
2025-09-19 17:01:30.280492 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-09-19 17:01:30.280502 | orchestrator | Friday 19 September 2025 16:55:46 +0000 (0:00:02.502) 0:00:34.802 ******
2025-09-19 17:01:30.280512 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-19 17:01:30.280522 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-19 17:01:30.280586 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-19 17:01:30.280596 | orchestrator |
2025-09-19 17:01:30.280612 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-09-19 17:01:30.280622 | orchestrator | Friday 19 September 2025 16:55:52 +0000 (0:00:05.471) 0:00:40.276 ******
2025-09-19 17:01:30.280631 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.280667 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.280689 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.280699 | orchestrator |
2025-09-19 17:01:30.280709 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-09-19 17:01:30.280718 | orchestrator | Friday 19 September 2025 16:55:53 +0000 (0:00:01.053) 0:00:41.330 ******
2025-09-19 17:01:30.280728 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-19 17:01:30.280739 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-19 17:01:30.280761 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-19 17:01:30.280771 | orchestrator |
2025-09-19 17:01:30.280780 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-09-19 17:01:30.280790 | orchestrator | Friday 19 September 2025 16:55:56 +0000 (0:00:02.698) 0:00:44.029 ******
2025-09-19 17:01:30.280877 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-19 17:01:30.280922 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-19 17:01:30.280933 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-19 17:01:30.280942 | orchestrator |
2025-09-19 17:01:30.280952 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-09-19 17:01:30.280962 | orchestrator | Friday 19 September 2025 16:56:00 +0000 (0:00:04.482) 0:00:48.512 ******
2025-09-19 17:01:30.280972 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-09-19 17:01:30.280981 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-09-19 17:01:30.281078 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-09-19 17:01:30.281090 | orchestrator |
2025-09-19 17:01:30.281100 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-09-19 17:01:30.281109 | orchestrator | Friday 19 September 2025 16:56:02 +0000 (0:00:01.873) 0:00:50.386 ******
2025-09-19 17:01:30.281119 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-09-19 17:01:30.281128 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-09-19 17:01:30.281138 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-09-19 17:01:30.281147 | orchestrator |
2025-09-19 17:01:30.281157 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-19 17:01:30.281167 | orchestrator | Friday 19 September 2025 16:56:04 +0000 (0:00:02.361) 0:00:52.747 ******
2025-09-19 17:01:30.281177 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:01:30.281186 | orchestrator |
2025-09-19 17:01:30.281196 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-09-19 17:01:30.281206 | orchestrator | Friday 19 September 2025 16:56:05 +0000 (0:00:00.603) 0:00:53.351 ******
2025-09-19 17:01:30.281221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.281232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.281248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-19 17:01:30.281259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False,
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 17:01:30.281302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 17:01:30.281320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 17:01:30.281335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 17:01:30.281346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 17:01:30.281356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 17:01:30.281415 | orchestrator | 2025-09-19 17:01:30.281425 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-09-19 17:01:30.281435 | orchestrator | Friday 19 September 2025 16:56:09 +0000 (0:00:04.414) 0:00:57.765 ****** 2025-09-19 17:01:30.281454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.281465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.281506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.281517 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.281527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.281541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.281552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.281562 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.281624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.281727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.281747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.281758 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.281768 | orchestrator | 2025-09-19 17:01:30.281823 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-09-19 17:01:30.281834 | orchestrator | Friday 19 September 2025 16:56:10 +0000 (0:00:00.994) 0:00:58.759 ****** 2025-09-19 17:01:30.281863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.281874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.281889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.281899 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.281909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.281925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.281942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.281952 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.281962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.281972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.281983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.281997 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.282007 | orchestrator | 2025-09-19 17:01:30.282059 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-19 17:01:30.282072 | orchestrator | Friday 19 September 2025 16:56:11 +0000 (0:00:00.872) 0:00:59.631 
****** 2025-09-19 17:01:30.282083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.282176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.282196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.282206 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.282216 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.282274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.282318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.282329 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.282348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.282358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.282374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.282391 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.282401 | orchestrator | 2025-09-19 17:01:30.282410 | orchestrator | TASK [service-cert-copy : mariadb | Copying over 
backend internal TLS certificate] *** 2025-09-19 17:01:30.282420 | orchestrator | Friday 19 September 2025 16:56:12 +0000 (0:00:00.844) 0:01:00.476 ****** 2025-09-19 17:01:30.282430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.282440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.282450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.282460 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.282474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.282485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.282495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-09-19 17:01:30.282512 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.282527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.282537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.282547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.282557 | 
orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.282567 | orchestrator | 2025-09-19 17:01:30.282576 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-19 17:01:30.282586 | orchestrator | Friday 19 September 2025 16:56:13 +0000 (0:00:00.743) 0:01:01.220 ****** 2025-09-19 17:01:30.282596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.282610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.282621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.282642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.282653 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.282663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.282673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.282682 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.282799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.282812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.282827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.282898 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.282910 | orchestrator | 2025-09-19 17:01:30.282920 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-09-19 17:01:30.282930 | orchestrator | Friday 19 September 2025 16:56:14 +0000 (0:00:00.728) 0:01:01.948 ****** 2025-09-19 17:01:30.282962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.282980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.282991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.283001 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.283011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.283021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.283036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.283053 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.283062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.283078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.283088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.283098 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.283107 | orchestrator | 2025-09-19 17:01:30.283117 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-19 17:01:30.283127 | orchestrator | Friday 19 September 2025 16:56:14 +0000 (0:00:00.755) 0:01:02.703 ****** 2025-09-19 17:01:30.283137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.283147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 
 2025-09-19 17:01:30.283157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.283173 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.283187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.283198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.283215 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.283225 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.283235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.283245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.283255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.283271 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.283280 | orchestrator | 2025-09-19 17:01:30.283290 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-19 17:01:30.283300 | orchestrator | Friday 19 September 2025 16:56:15 +0000 (0:00:00.644) 0:01:03.348 ****** 2025-09-19 17:01:30.283314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.283324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.283334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.283344 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.283360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.283371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.283381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.283396 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.283406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-19 17:01:30.283424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-19 17:01:30.283435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-19 17:01:30.283444 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.283454 | orchestrator | 2025-09-19 17:01:30.283464 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-19 17:01:30.283474 | orchestrator | Friday 19 September 2025 16:56:16 +0000 (0:00:01.218) 0:01:04.567 ****** 2025-09-19 17:01:30.283483 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 17:01:30.283493 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 17:01:30.283508 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-19 17:01:30.283518 | orchestrator | 2025-09-19 17:01:30.283527 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-19 17:01:30.283537 | orchestrator | Friday 19 September 2025 16:56:18 +0000 (0:00:02.037) 0:01:06.604 ****** 2025-09-19 17:01:30.283547 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-19 17:01:30.283556 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-19 17:01:30.283566 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-19 17:01:30.283576 | orchestrator | 2025-09-19 17:01:30.283585 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-19 17:01:30.283595 | orchestrator | Friday 19 September 2025 16:56:20 +0000 (0:00:01.738) 0:01:08.342 ****** 2025-09-19 17:01:30.283604 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 17:01:30.283614 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 17:01:30.283623 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 17:01:30.283633 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.283648 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 17:01:30.283658 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.283667 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-19 17:01:30.283677 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-19 17:01:30.283686 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.283696 | orchestrator | 2025-09-19 17:01:30.283706 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-19 17:01:30.283715 | orchestrator | Friday 19 September 2025 16:56:21 +0000 (0:00:01.377) 0:01:09.719 ****** 2025-09-19 17:01:30.283725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-19 17:01:30.283740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-19 17:01:30.283750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-19 17:01:30.283766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 17:01:30.283776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 17:01:30.283791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-19 17:01:30.283802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 17:01:30.283812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 17:01:30.283827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-19 17:01:30.283837 | orchestrator | 2025-09-19 17:01:30.283938 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-19 17:01:30.283949 | orchestrator | Friday 19 September 2025 16:56:24 +0000 (0:00:02.881) 0:01:12.601 ****** 2025-09-19 17:01:30.283959 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.283969 | orchestrator | 2025-09-19 17:01:30.283978 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-19 17:01:30.283988 | orchestrator | Friday 19 
September 2025 16:56:25 +0000 (0:00:00.668) 0:01:13.269 ****** 2025-09-19 17:01:30.283999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-19 17:01:30.284018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.284036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-19 17:01:30.284067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.284077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-19 17:01:30.284148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.284158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284182 | orchestrator | 2025-09-19 17:01:30.284192 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-19 17:01:30.284202 | orchestrator | Friday 19 September 2025 16:56:31 +0000 (0:00:05.766) 0:01:19.036 ****** 2025-09-19 17:01:30.284212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 17:01:30.284229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': 
'30'}}})  2025-09-19 17:01:30.284244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284260 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.284268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 
'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 17:01:30.284280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.284289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': 
'30'}}})  2025-09-19 17:01:30.284310 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.284323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-19 17:01:30.284332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.284340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284356 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.284364 | orchestrator | 2025-09-19 17:01:30.284376 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-19 17:01:30.284384 | orchestrator | Friday 19 September 2025 16:56:31 +0000 (0:00:00.783) 0:01:19.820 ****** 2025-09-19 17:01:30.284392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 17:01:30.284401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 17:01:30.284409 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.284417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 17:01:30.284430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 17:01:30.284439 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.284446 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-19 17:01:30.284454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-19 17:01:30.284463 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.284470 | orchestrator | 2025-09-19 17:01:30.284483 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-19 17:01:30.284491 | orchestrator | Friday 19 September 2025 16:56:32 +0000 (0:00:00.884) 0:01:20.704 ****** 2025-09-19 17:01:30.284499 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.284507 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.284514 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.284522 | orchestrator | 2025-09-19 17:01:30.284530 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-19 17:01:30.284538 | orchestrator | Friday 19 September 2025 16:56:34 +0000 (0:00:01.272) 0:01:21.977 ****** 2025-09-19 17:01:30.284546 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.284554 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.284561 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.284569 | orchestrator | 2025-09-19 17:01:30.284577 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-19 17:01:30.284585 | orchestrator | Friday 19 September 2025 
16:56:36 +0000 (0:00:02.019) 0:01:23.997 ****** 2025-09-19 17:01:30.284593 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.284600 | orchestrator | 2025-09-19 17:01:30.284608 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-19 17:01:30.284616 | orchestrator | Friday 19 September 2025 16:56:36 +0000 (0:00:00.739) 0:01:24.736 ****** 2025-09-19 17:01:30.284625 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.284634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.284677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.284685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}})  2025-09-19 17:01:30.284719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284741 | orchestrator | 2025-09-19 17:01:30.284749 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-19 17:01:30.284757 | orchestrator | Friday 19 September 2025 16:56:40 +0000 (0:00:03.895) 0:01:28.632 ****** 2025-09-19 17:01:30.284770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.284779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284795 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.284804 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.284821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284837 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.284864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.284873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284882 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.284890 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.284898 | orchestrator | 2025-09-19 17:01:30.284911 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-19 17:01:30.284919 | orchestrator | Friday 19 September 2025 16:56:41 +0000 (0:00:00.679) 0:01:29.312 ****** 2025-09-19 17:01:30.284928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 17:01:30.284936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 17:01:30.284945 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.284957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 17:01:30.284965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}})  2025-09-19 17:01:30.284973 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.284981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 17:01:30.284989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-19 17:01:30.284997 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.285005 | orchestrator | 2025-09-19 17:01:30.285013 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-19 17:01:30.285020 | orchestrator | Friday 19 September 2025 16:56:42 +0000 (0:00:00.831) 0:01:30.143 ****** 2025-09-19 17:01:30.285028 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.285036 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.285044 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.285052 | orchestrator | 2025-09-19 17:01:30.285059 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-19 17:01:30.285067 | orchestrator | Friday 19 September 2025 16:56:43 +0000 (0:00:01.282) 0:01:31.426 ****** 2025-09-19 17:01:30.285075 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.285082 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.285090 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.285098 | orchestrator | 2025-09-19 17:01:30.285110 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-19 17:01:30.285118 | orchestrator | Friday 19 September 2025 16:56:45 +0000 (0:00:01.981) 0:01:33.408 ****** 2025-09-19 17:01:30.285126 | orchestrator | 
skipping: [testbed-node-0] 2025-09-19 17:01:30.285134 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.285141 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.285149 | orchestrator | 2025-09-19 17:01:30.285157 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-19 17:01:30.285165 | orchestrator | Friday 19 September 2025 16:56:45 +0000 (0:00:00.302) 0:01:33.710 ****** 2025-09-19 17:01:30.285173 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.285181 | orchestrator | 2025-09-19 17:01:30.285188 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-19 17:01:30.285196 | orchestrator | Friday 19 September 2025 16:56:46 +0000 (0:00:00.835) 0:01:34.546 ****** 2025-09-19 17:01:30.285205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 17:01:30.285219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 17:01:30.285231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-19 17:01:30.285239 | orchestrator | 2025-09-19 17:01:30.285247 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-19 17:01:30.285255 | orchestrator | Friday 19 September 2025 16:56:49 +0000 (0:00:02.608) 0:01:37.155 ****** 2025-09-19 17:01:30.285268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 17:01:30.285276 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.285284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 17:01:30.285298 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.285306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-19 17:01:30.285314 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.285322 | orchestrator | 2025-09-19 17:01:30.285330 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-19 17:01:30.285338 | orchestrator | Friday 19 September 2025 16:56:50 +0000 (0:00:01.562) 0:01:38.717 ****** 2025-09-19 17:01:30.285346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 17:01:30.285359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 17:01:30.285368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': 
['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 17:01:30.285377 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.285385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 17:01:30.285393 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.285406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 17:01:30.285414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-19 17:01:30.285428 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.285435 | orchestrator | 2025-09-19 17:01:30.285443 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 
2025-09-19 17:01:30.285451 | orchestrator | Friday 19 September 2025 16:56:52 +0000 (0:00:01.982) 0:01:40.700 ******
2025-09-19 17:01:30.285459 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.285467 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.285474 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.285482 | orchestrator |
2025-09-19 17:01:30.285490 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-09-19 17:01:30.285498 | orchestrator | Friday 19 September 2025 16:56:53 +0000 (0:00:00.692) 0:01:41.392 ******
2025-09-19 17:01:30.285506 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.285513 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.285521 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.285529 | orchestrator |
2025-09-19 17:01:30.285537 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-09-19 17:01:30.285545 | orchestrator | Friday 19 September 2025 16:56:54 +0000 (0:00:01.083) 0:01:42.476 ******
2025-09-19 17:01:30.285552 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:01:30.285560 | orchestrator |
2025-09-19 17:01:30.285568 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-09-19 17:01:30.285576 | orchestrator | Friday 19 September 2025 16:56:55 +0000 (0:00:00.730) 0:01:43.206 ******
2025-09-19 17:01:30.285584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.285603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285630 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.285647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.285697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285722 | orchestrator | 2025-09-19 17:01:30.285730 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-19 17:01:30.285738 | orchestrator | Friday 19 September 2025 16:56:59 +0000 (0:00:04.091) 0:01:47.298 ****** 2025-09-19 17:01:30.285749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.285764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285794 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.285802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.285813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285865 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.285874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.285882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.285915 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.285923 | orchestrator | 2025-09-19 17:01:30.285931 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-09-19 17:01:30.285939 | orchestrator | Friday 19 September 2025 16:57:00 +0000 (0:00:00.919) 0:01:48.218 ****** 2025-09-19 17:01:30.285947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 17:01:30.285960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-09-19 17:01:30.285969 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.285977 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 17:01:30.285985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 17:01:30.285993 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.286001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 17:01:30.286009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-19 17:01:30.286042 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.286051 | orchestrator |
2025-09-19 17:01:30.286059 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-09-19 17:01:30.286067 | orchestrator | Friday 19 September 2025 16:57:01 +0000 (0:00:00.874) 0:01:49.092 ******
2025-09-19 17:01:30.286075 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.286083 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.286091 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.286099 | orchestrator |
2025-09-19 17:01:30.286107 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-09-19 17:01:30.286115 | orchestrator | Friday 19 September 2025 16:57:02 +0000 (0:00:01.317) 0:01:50.410 ******
2025-09-19 17:01:30.286122 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.286130 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.286138 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.286146 | orchestrator |
2025-09-19 17:01:30.286154 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-09-19 17:01:30.286161 | orchestrator | Friday 19 September 2025 16:57:04 +0000 (0:00:01.962) 0:01:52.372 ******
2025-09-19 17:01:30.286169 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.286177 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.286185 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.286193 | orchestrator |
2025-09-19 17:01:30.286201 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-09-19 17:01:30.286214 | orchestrator | Friday 19 September 2025 16:57:04 +0000 (0:00:00.467) 0:01:52.840 ******
2025-09-19 17:01:30.286222 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.286229 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.286237 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.286245 | orchestrator |
2025-09-19 17:01:30.286253 | orchestrator | TASK [include_role : designate] ************************************************
2025-09-19 17:01:30.286261 | orchestrator | Friday 19 September 2025 16:57:05 +0000 (0:00:00.296) 0:01:53.137 ******
2025-09-19 17:01:30.286269 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:01:30.286276 | orchestrator |
2025-09-19 17:01:30.286284 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-09-19 17:01:30.286292 | orchestrator | Friday 19 September 2025 16:57:06 +0000 (0:00:00.770) 0:01:53.907 ******
2025-09-19 17:01:30.286305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 17:01:30.286327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 17:01:30.286336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-09-19 17:01:30.286344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 17:01:30.286378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 17:01:30.286399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 17:01:30.286428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 17:01:30.286461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286528 | orchestrator | 
2025-09-19 17:01:30.286536 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-09-19 17:01:30.286544 | orchestrator | Friday 19 September 2025 16:57:09 +0000 (0:00:03.779) 0:01:57.686 ****** 2025-09-19 17:01:30.286557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 17:01:30.286565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 17:01:30.286582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286632 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.286640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 17:01:30.286653 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 17:01:30.286661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 17:01:30.286711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 17:01:30.286738 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.286750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.286807 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-19 17:01:30.286816 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.286824 | orchestrator |
2025-09-19 17:01:30.286832 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-09-19 17:01:30.286840 | orchestrator | Friday 19 September 2025 16:57:10 +0000 (0:00:00.846) 0:01:58.533 ******
2025-09-19 17:01:30.286891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-19 17:01:30.286900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-19 17:01:30.286908 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.286916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-19 17:01:30.286924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-19 17:01:30.286932 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.286940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-19 17:01:30.286948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-19 17:01:30.286956 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.286964 | orchestrator |
2025-09-19 17:01:30.286972 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-09-19 17:01:30.286985 | orchestrator | Friday 19 September 2025 16:57:11 +0000 (0:00:00.972) 0:01:59.506 ******
2025-09-19 17:01:30.286993 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.287001 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.287009 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.287017 | orchestrator |
2025-09-19 17:01:30.287024 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-09-19 17:01:30.287033 | orchestrator | Friday 19 September 2025 16:57:12 +0000 (0:00:01.309) 0:02:00.815 ******
2025-09-19 17:01:30.287040 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.287048 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.287056 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.287064 | orchestrator |
2025-09-19 17:01:30.287072 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-09-19 17:01:30.287080 | orchestrator | Friday 19 September 2025 16:57:15 +0000 (0:00:02.153) 0:02:02.968 ******
2025-09-19 17:01:30.287088 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.287096 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.287103 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.287109 | orchestrator |
2025-09-19 17:01:30.287116 | orchestrator | TASK [include_role : glance] ***************************************************
2025-09-19 17:01:30.287129 | orchestrator | Friday 19 September 2025 16:57:15 +0000 (0:00:00.517) 0:02:03.486 ******
2025-09-19 17:01:30.287136 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:01:30.287143 | orchestrator |
2025-09-19 17:01:30.287149 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-09-19 17:01:30.287156 | orchestrator | Friday 19 September 2025 16:57:16 +0000 (0:00:00.810) 0:02:04.297 ******
2025-09-19 17:01:30.287172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 17:01:30.287185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-09-19 17:01:30.287202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 17:01:30.287214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-09-19 17:01:30.287226 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 17:01:30.287239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-09-19 17:01:30.287246 | orchestrator |
2025-09-19 17:01:30.287253 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2025-09-19 17:01:30.287260 | orchestrator | Friday 19 September 2025 16:57:20 +0000 (0:00:04.179) 0:02:08.476 ******
2025-09-19 17:01:30.287275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 17:01:30.287288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-09-19 17:01:30.287296 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.287308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 17:01:30.287326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-09-19 17:01:30.287334 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.287344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-19 17:01:30.287361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-09-19 17:01:30.287368 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.287375 | orchestrator |
2025-09-19 17:01:30.287382 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2025-09-19 17:01:30.287389 | orchestrator | Friday 19 September 2025 16:57:23 +0000 (0:00:03.259) 0:02:11.736 ******
2025-09-19 17:01:30.287396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-09-19 17:01:30.287403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-09-19 17:01:30.287410 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.287421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-09-19 17:01:30.287433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-09-19 17:01:30.287440 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.287447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-09-19 17:01:30.287458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-09-19 17:01:30.287465 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.287472 | orchestrator |
2025-09-19 17:01:30.287479 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-09-19 17:01:30.287486 | orchestrator | Friday 19 September 2025 16:57:27 +0000 (0:00:03.335) 0:02:15.072 ******
2025-09-19 17:01:30.287492 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.287499 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.287506 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.287512 | orchestrator |
2025-09-19 17:01:30.287519 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-09-19 17:01:30.287526 | orchestrator | Friday 19 September 2025 16:57:28 +0000 (0:00:01.419) 0:02:16.491 ******
2025-09-19 17:01:30.287532 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.287539 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.287546 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.287552 | orchestrator |
2025-09-19 17:01:30.287559 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-09-19 17:01:30.287566 | orchestrator | Friday 19 September 2025 16:57:30 +0000 (0:00:02.096) 0:02:18.588 ******
2025-09-19 17:01:30.287572 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.287579 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.287586 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.287592 | orchestrator |
2025-09-19 17:01:30.287599 | orchestrator | TASK [include_role : grafana] **************************************************
2025-09-19 17:01:30.287605 | orchestrator | Friday 19 September 2025 16:57:31 +0000 (0:00:00.522) 0:02:19.110 ******
2025-09-19 17:01:30.287612 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:01:30.287619 | orchestrator |
2025-09-19 17:01:30.287625 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-09-19 17:01:30.287632 | orchestrator | Friday 19 September 2025 16:57:32 +0000 (0:00:00.832) 0:02:19.942 ******
2025-09-19 17:01:30.287639 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 17:01:30.287654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 17:01:30.287661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 17:01:30.287668 | orchestrator |
2025-09-19 17:01:30.287675 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2025-09-19 17:01:30.287681 | orchestrator | Friday 19 September 2025 16:57:35 +0000 (0:00:03.371) 0:02:23.314 ******
2025-09-19 17:01:30.287693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 17:01:30.287700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 17:01:30.287707 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.287714 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.287721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-19 17:01:30.287732 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.287739 | orchestrator |
2025-09-19 17:01:30.287746 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-09-19 17:01:30.287753 | orchestrator | Friday 19 September 2025 16:57:36 +0000 (0:00:00.748) 0:02:24.063 ******
2025-09-19 17:01:30.287760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-09-19 17:01:30.287767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-19 17:01:30.287774 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.287780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-09-19 17:01:30.287790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-19 17:01:30.287797 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.287804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-09-19 17:01:30.287811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-09-19 17:01:30.287818 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.287825 | orchestrator |
2025-09-19 17:01:30.287832 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-09-19 17:01:30.287838 | orchestrator | Friday 19 September 2025 16:57:36 +0000 (0:00:00.825) 0:02:24.888 ******
2025-09-19 17:01:30.287857 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.287864 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.287870 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.287877 | orchestrator |
2025-09-19 17:01:30.287884 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-09-19 17:01:30.287890 | orchestrator | Friday 19 September 2025 16:57:38 +0000 (0:00:01.346) 0:02:26.234 ******
2025-09-19 17:01:30.287897 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.287904 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.287910 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.287917 | orchestrator |
2025-09-19 17:01:30.287924 | orchestrator | TASK [include_role : heat] *****************************************************
2025-09-19 17:01:30.287931 | orchestrator | Friday 19 September 2025 16:57:40 +0000 (0:00:00.525) 0:02:28.235 ******
2025-09-19 17:01:30.287938 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.287944 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.287955 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.287962 | orchestrator |
2025-09-19 17:01:30.287969 | orchestrator | TASK [include_role : horizon] **************************************************
2025-09-19 17:01:30.287976 | orchestrator | Friday 19 September 2025 16:57:40 +0000 (0:00:00.525) 0:02:28.761 ******
2025-09-19 17:01:30.287982 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:01:30.287989 | orchestrator |
2025-09-19 17:01:30.287996 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config]
******************** 2025-09-19 17:01:30.288007 | orchestrator | Friday 19 September 2025 16:57:41 +0000 (0:00:01.100) 0:02:29.862 ****** 2025-09-19 17:01:30.288015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 17:01:30.288160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 17:01:30.288183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 17:01:30.288191 | orchestrator | 2025-09-19 17:01:30.288198 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-19 17:01:30.288205 | orchestrator | Friday 19 September 2025 16:57:45 +0000 (0:00:03.554) 0:02:33.416 ****** 2025-09-19 17:01:30.288217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 17:01:30.288230 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.288243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 17:01:30.288251 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.288263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-19 17:01:30.288276 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.288283 | orchestrator | 2025-09-19 17:01:30.288289 | orchestrator | TASK [haproxy-config : Configuring firewall for 
horizon] *********************** 2025-09-19 17:01:30.288296 | orchestrator | Friday 19 September 2025 16:57:46 +0000 (0:00:01.118) 0:02:34.535 ****** 2025-09-19 17:01:30.288303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 17:01:30.288310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 17:01:30.288318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 17:01:30.288328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 17:01:30.288336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-19 17:01:30.288343 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.288350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 17:01:30.288357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 17:01:30.288368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 17:01:30.288379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 17:01:30.288386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-19 17:01:30.288393 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.288400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}})  2025-09-19 17:01:30.288407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 17:01:30.288414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-19 17:01:30.288421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-19 17:01:30.288427 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-19 17:01:30.288434 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.288441 | orchestrator | 2025-09-19 17:01:30.288447 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-19 17:01:30.288454 | orchestrator | Friday 19 September 2025 16:57:47 +0000 (0:00:00.953) 0:02:35.488 ****** 2025-09-19 17:01:30.288461 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.288468 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.288475 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.288481 | orchestrator | 2025-09-19 17:01:30.288488 | orchestrator | TASK [proxysql-config : Copying over 
horizon ProxySQL rules config] ************ 2025-09-19 17:01:30.288495 | orchestrator | Friday 19 September 2025 16:57:48 +0000 (0:00:01.229) 0:02:36.718 ****** 2025-09-19 17:01:30.288501 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.288508 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.288515 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.288522 | orchestrator | 2025-09-19 17:01:30.288531 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-19 17:01:30.288538 | orchestrator | Friday 19 September 2025 16:57:50 +0000 (0:00:02.176) 0:02:38.894 ****** 2025-09-19 17:01:30.288545 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.288552 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.288559 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.288569 | orchestrator | 2025-09-19 17:01:30.288576 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-09-19 17:01:30.288583 | orchestrator | Friday 19 September 2025 16:57:51 +0000 (0:00:00.313) 0:02:39.207 ****** 2025-09-19 17:01:30.288590 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.288596 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.288603 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.288610 | orchestrator | 2025-09-19 17:01:30.288616 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-19 17:01:30.288623 | orchestrator | Friday 19 September 2025 16:57:51 +0000 (0:00:00.521) 0:02:39.729 ****** 2025-09-19 17:01:30.288630 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.288637 | orchestrator | 2025-09-19 17:01:30.288643 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-19 17:01:30.288650 | orchestrator | Friday 19 
September 2025 16:57:52 +0000 (0:00:00.961) 0:02:40.690 ****** 2025-09-19 17:01:30.288661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:01:30.288669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:01:30.288677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 17:01:30.288687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:01:30.288700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:01:30.288707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 17:01:30.288718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:01:30.288726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:01:30.288733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 17:01:30.288740 | orchestrator | 2025-09-19 17:01:30.288746 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-19 17:01:30.288757 | orchestrator | Friday 19 September 2025 16:57:56 +0000 (0:00:03.831) 0:02:44.521 ****** 2025-09-19 17:01:30.288769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 17:01:30.288778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:01:30.288791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 17:01:30.288799 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.288807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 17:01:30.288816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:01:30.288831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 17:01:30.288840 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.288863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 17:01:30.288876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 
17:01:30.288885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 17:01:30.288893 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.288900 | orchestrator | 2025-09-19 17:01:30.288907 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-19 17:01:30.288915 | orchestrator | Friday 19 September 2025 16:57:57 +0000 (0:00:00.869) 0:02:45.391 ****** 2025-09-19 17:01:30.288923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-19 17:01:30.288931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-19 17:01:30.288939 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.288951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-19 17:01:30.288960 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-19 17:01:30.288968 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.288976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-19 17:01:30.288987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-19 17:01:30.288995 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.289002 | orchestrator | 2025-09-19 17:01:30.289010 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-19 17:01:30.289017 | orchestrator | Friday 19 September 2025 16:57:58 +0000 (0:00:00.903) 0:02:46.294 ****** 2025-09-19 17:01:30.289025 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.289033 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.289041 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.289049 | orchestrator | 2025-09-19 17:01:30.289057 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-19 17:01:30.289064 | orchestrator | Friday 19 September 2025 16:57:59 +0000 (0:00:01.334) 0:02:47.629 ****** 2025-09-19 17:01:30.289071 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.289077 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.289084 | orchestrator | changed: 
[testbed-node-2] 2025-09-19 17:01:30.289091 | orchestrator | 2025-09-19 17:01:30.289097 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-19 17:01:30.289104 | orchestrator | Friday 19 September 2025 16:58:01 +0000 (0:00:01.985) 0:02:49.615 ****** 2025-09-19 17:01:30.289111 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.289118 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.289124 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.289131 | orchestrator | 2025-09-19 17:01:30.289138 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-19 17:01:30.289144 | orchestrator | Friday 19 September 2025 16:58:02 +0000 (0:00:00.586) 0:02:50.201 ****** 2025-09-19 17:01:30.289151 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.289158 | orchestrator | 2025-09-19 17:01:30.289164 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-09-19 17:01:30.289171 | orchestrator | Friday 19 September 2025 16:58:03 +0000 (0:00:00.991) 0:02:51.192 ****** 2025-09-19 17:01:30.289182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:01:30.289194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:01:30.289201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 
'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:01:30.289239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289252 | orchestrator | 2025-09-19 17:01:30.289259 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-19 17:01:30.289266 | orchestrator | Friday 19 September 2025 16:58:07 +0000 (0:00:04.188) 0:02:55.381 ****** 2025-09-19 17:01:30.289273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 17:01:30.289280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289287 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.289298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 17:01:30.289309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289316 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.289323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 17:01:30.289334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289341 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 17:01:30.289348 | orchestrator | 2025-09-19 17:01:30.289354 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-19 17:01:30.289361 | orchestrator | Friday 19 September 2025 16:58:08 +0000 (0:00:00.939) 0:02:56.321 ****** 2025-09-19 17:01:30.289368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-19 17:01:30.289376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-19 17:01:30.289383 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.289390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-19 17:01:30.289400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-19 17:01:30.289407 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.289413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-19 17:01:30.289421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-19 17:01:30.289427 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.289434 | orchestrator | 2025-09-19 17:01:30.289441 | orchestrator | 
TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-19 17:01:30.289448 | orchestrator | Friday 19 September 2025 16:58:09 +0000 (0:00:01.091) 0:02:57.412 ****** 2025-09-19 17:01:30.289455 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.289461 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.289468 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.289475 | orchestrator | 2025-09-19 17:01:30.289482 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-19 17:01:30.289489 | orchestrator | Friday 19 September 2025 16:58:10 +0000 (0:00:01.344) 0:02:58.757 ****** 2025-09-19 17:01:30.289500 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.289507 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.289513 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.289520 | orchestrator | 2025-09-19 17:01:30.289527 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-19 17:01:30.289533 | orchestrator | Friday 19 September 2025 16:58:12 +0000 (0:00:02.092) 0:03:00.850 ****** 2025-09-19 17:01:30.289543 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.289550 | orchestrator | 2025-09-19 17:01:30.289557 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-19 17:01:30.289564 | orchestrator | Friday 19 September 2025 16:58:14 +0000 (0:00:01.234) 0:03:02.084 ****** 2025-09-19 17:01:30.289571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-19 17:01:30.289578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': 
True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-19 17:01:30.289600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-19 17:01:30.289661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289690 | orchestrator | 2025-09-19 17:01:30.289696 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-19 17:01:30.289703 | orchestrator | Friday 19 September 2025 16:58:17 +0000 (0:00:03.520) 0:03:05.605 ****** 2025-09-19 17:01:30.289710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 17:01:30.289717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 
'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289746 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.289753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 17:01:30.289764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-19 17:01:30.289785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289806 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.289813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.289831 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.289837 | orchestrator | 2025-09-19 17:01:30.289856 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-19 17:01:30.289864 | orchestrator | Friday 19 September 2025 16:58:18 +0000 (0:00:00.658) 0:03:06.264 ****** 2025-09-19 17:01:30.289870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-19 17:01:30.289877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-19 17:01:30.289884 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.289891 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-19 17:01:30.289898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-19 17:01:30.289904 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.289911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-19 17:01:30.289918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-19 17:01:30.289925 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.289932 | orchestrator | 2025-09-19 17:01:30.289939 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-09-19 17:01:30.289945 | orchestrator | Friday 19 September 2025 16:58:19 +0000 (0:00:01.534) 0:03:07.799 ****** 2025-09-19 17:01:30.289952 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.289959 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.289965 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.289972 | orchestrator | 2025-09-19 17:01:30.289979 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-19 17:01:30.289990 | orchestrator | Friday 19 September 2025 16:58:21 +0000 (0:00:01.402) 0:03:09.202 ****** 2025-09-19 17:01:30.289997 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.290003 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.290010 | orchestrator | changed: 
[testbed-node-2] 2025-09-19 17:01:30.290046 | orchestrator | 2025-09-19 17:01:30.290055 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-19 17:01:30.290062 | orchestrator | Friday 19 September 2025 16:58:23 +0000 (0:00:02.131) 0:03:11.333 ****** 2025-09-19 17:01:30.290069 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.290076 | orchestrator | 2025-09-19 17:01:30.290082 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-19 17:01:30.290089 | orchestrator | Friday 19 September 2025 16:58:24 +0000 (0:00:01.274) 0:03:12.608 ****** 2025-09-19 17:01:30.290102 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 17:01:30.290109 | orchestrator | 2025-09-19 17:01:30.290115 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-19 17:01:30.290122 | orchestrator | Friday 19 September 2025 16:58:27 +0000 (0:00:02.998) 0:03:15.606 ****** 2025-09-19 17:01:30.290135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 
'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:01:30.290143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 17:01:30.290150 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.290161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:01:30.290173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 17:01:30.290180 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.290199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:01:30.290211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 17:01:30.290218 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.290225 | orchestrator | 2025-09-19 17:01:30.290232 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-19 17:01:30.290238 | orchestrator | Friday 19 September 2025 16:58:29 +0000 (0:00:02.144) 0:03:17.750 ****** 2025-09-19 17:01:30.290249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:01:30.290261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 17:01:30.290268 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.290276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:01:30.290291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 17:01:30.290298 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.290310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:01:30.290318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-19 17:01:30.290330 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.290337 | orchestrator | 2025-09-19 17:01:30.290344 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-19 17:01:30.290350 | orchestrator | Friday 19 September 2025 16:58:32 +0000 (0:00:02.308) 0:03:20.059 ****** 2025-09-19 17:01:30.290357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 17:01:30.290367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 17:01:30.290375 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.290382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 17:01:30.290389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 17:01:30.290396 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.290407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': 
['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 17:01:30.290414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-19 17:01:30.290425 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.290432 | orchestrator | 2025-09-19 17:01:30.290439 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-09-19 17:01:30.290445 | orchestrator | Friday 19 September 2025 16:58:35 +0000 (0:00:02.876) 0:03:22.936 ****** 2025-09-19 17:01:30.290452 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.290459 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.290466 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.290473 | orchestrator | 2025-09-19 17:01:30.290479 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-19 17:01:30.290486 | orchestrator | Friday 19 September 2025 16:58:36 +0000 (0:00:01.947) 0:03:24.883 ****** 2025-09-19 17:01:30.290493 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.290500 | orchestrator | skipping: [testbed-node-1] 2025-09-19 
17:01:30.290506 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.290513 | orchestrator | 2025-09-19 17:01:30.290520 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-19 17:01:30.290526 | orchestrator | Friday 19 September 2025 16:58:38 +0000 (0:00:01.439) 0:03:26.323 ****** 2025-09-19 17:01:30.290533 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.290540 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.290546 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.290553 | orchestrator | 2025-09-19 17:01:30.290560 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-19 17:01:30.290567 | orchestrator | Friday 19 September 2025 16:58:38 +0000 (0:00:00.314) 0:03:26.638 ****** 2025-09-19 17:01:30.290590 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.290598 | orchestrator | 2025-09-19 17:01:30.290604 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-19 17:01:30.290611 | orchestrator | Friday 19 September 2025 16:58:40 +0000 (0:00:01.350) 0:03:27.988 ****** 2025-09-19 17:01:30.290621 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2025-09-19 17:01:30.290629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-19 17:01:30.290646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-19 17:01:30.290653 | orchestrator | 2025-09-19 17:01:30.290660 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-19 17:01:30.290666 | orchestrator | Friday 19 September 2025 16:58:41 +0000 (0:00:01.600) 0:03:29.589 ****** 2025-09-19 17:01:30.290673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-19 17:01:30.290680 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.290687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-19 17:01:30.290694 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.290704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-19 17:01:30.290712 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.290718 | orchestrator | 2025-09-19 17:01:30.290725 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-19 17:01:30.290732 | orchestrator | Friday 19 September 2025 16:58:42 +0000 (0:00:00.397) 0:03:29.986 ****** 2025-09-19 17:01:30.290739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-19 17:01:30.290751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-19 17:01:30.290758 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.290765 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.290775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-19 17:01:30.290783 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.290789 | orchestrator | 2025-09-19 17:01:30.290796 | 
orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-19 17:01:30.290803 | orchestrator | Friday 19 September 2025 16:58:42 +0000 (0:00:00.819) 0:03:30.805 ****** 2025-09-19 17:01:30.290809 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.290816 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.290822 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.290829 | orchestrator | 2025-09-19 17:01:30.290836 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-19 17:01:30.290842 | orchestrator | Friday 19 September 2025 16:58:43 +0000 (0:00:00.460) 0:03:31.265 ****** 2025-09-19 17:01:30.290863 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.290870 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.290877 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.290883 | orchestrator | 2025-09-19 17:01:30.290890 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-19 17:01:30.290897 | orchestrator | Friday 19 September 2025 16:58:44 +0000 (0:00:01.250) 0:03:32.515 ****** 2025-09-19 17:01:30.290903 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.290910 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.290916 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.290923 | orchestrator | 2025-09-19 17:01:30.290930 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-19 17:01:30.290936 | orchestrator | Friday 19 September 2025 16:58:44 +0000 (0:00:00.305) 0:03:32.821 ****** 2025-09-19 17:01:30.290943 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.290950 | orchestrator | 2025-09-19 17:01:30.290956 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 
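[Editor's note: the `custom_member_list` entries in the mariadb loop items above are rendered by the kolla-ansible `haproxy-config` role into an active/passive TCP listen section. A minimal sketch of the resulting HAProxy block, assuming the standard kolla template shape and using `<internal_vip>` as a placeholder for the internal VIP (not shown in this log):

```cfg
listen mariadb
  mode tcp
  bind <internal_vip>:3306
  # frontend_tcp_extra
  option clitcpka
  timeout client 3600s
  # backend_tcp_extra
  option srvtcpka
  timeout server 3600s
  # custom_member_list: only node-0 takes traffic; the 'backup' servers
  # are used only when the primary's health check (port 3306) fails,
  # which keeps writes on a single Galera node at a time.
  server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5
  server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup
  server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup
```

The memcached items above carry `enabled: False` under their `haproxy` key, which is why the corresponding frontend/firewall tasks are skipped on all three nodes.]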
2025-09-19 17:01:30.290963 | orchestrator | Friday 19 September 2025 16:58:46 +0000 (0:00:01.467) 0:03:34.289 ****** 2025-09-19 17:01:30.290970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 17:01:30.290980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.290995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 17:01:30.291015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 17:01:30.291032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291044 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 17:01:30.291097 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:01:30.291123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:01:30.291192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 17:01:30.291206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:01:30.291220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 17:01:30.291257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:01:30.291267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 17:01:30.291278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 
'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 17:01:30.291313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:01:30.291353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2025-09-19 17:01:30.291382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 17:01:30.291400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:01:30.291407 | orchestrator | 2025-09-19 17:01:30.291414 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-19 17:01:30.291427 | orchestrator | Friday 19 September 2025 16:58:50 +0000 (0:00:04.119) 0:03:38.409 ****** 2025-09-19 17:01:30.291434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:01:30.291444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
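The skipping/changed pattern in the loop output above follows from each service item's `enabled` and `host_in_groups` fields: only `neutron-server` (enabled, in group) reports `changed`, while disabled agents and the `'no'`-flagged `neutron-tls-proxy` are skipped. A minimal sketch of that decision, assuming kolla-style truthy strings (`should_deploy` is a hypothetical helper, not the actual kolla-ansible code):

```python
def should_deploy(service: dict) -> bool:
    """Return True when a kolla service item should be acted on (not skipped)."""
    # kolla service dicts mix booleans with truthy strings such as 'yes'/'no'
    enabled = str(service.get("enabled", False)).lower() in ("true", "yes", "1")
    return enabled and bool(service.get("host_in_groups", False))

# Items mirroring the loop output above (trimmed to the relevant keys)
neutron_server = {"enabled": True, "host_in_groups": True}
neutron_ovn_agent = {"enabled": False, "host_in_groups": False}
neutron_tls_proxy = {"enabled": "no", "host_in_groups": True}

assert should_deploy(neutron_server)          # changed: [testbed-node-2]
assert not should_deploy(neutron_ovn_agent)   # skipping
assert not should_deploy(neutron_tls_proxy)   # string 'no' also skips
```

This mirrors why every host skips the agents (`enabled: False`) even when `host_in_groups` is true, and why the tls proxy skips despite being in the `neutron-server` group.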
2025-09-19 17:01:30.291470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 17:01:30.291481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291499 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:01:30.291506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:01:30.291552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 17:01:30.291661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2025-09-19 17:01:30.291691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:01:30.291739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 17:01:30.291757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:01:30.291774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:01:30.291782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291792 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 17:01:30.291799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}}})  2025-09-19 17:01:30.291824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-19 17:01:30.291834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 17:01:30.291923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:01:30.291941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291954 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.291961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:01:30.291968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.291975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-19 17:01:30.291989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.292000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-19 17:01:30.292015 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:01:30.292022 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.292029 | orchestrator | 2025-09-19 17:01:30.292036 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-19 17:01:30.292043 | orchestrator | Friday 19 September 2025 16:58:51 +0000 (0:00:01.457) 0:03:39.866 ****** 2025-09-19 17:01:30.292050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-19 17:01:30.292110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-19 17:01:30.292126 | 
orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.292133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-19 17:01:30.292140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-19 17:01:30.292147 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.292154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-19 17:01:30.292161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-19 17:01:30.292171 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.292177 | orchestrator | 2025-09-19 17:01:30.292184 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-19 17:01:30.292191 | orchestrator | Friday 19 September 2025 16:58:54 +0000 (0:00:02.090) 0:03:41.956 ****** 2025-09-19 17:01:30.292198 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.292205 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.292211 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.292218 | orchestrator | 2025-09-19 17:01:30.292224 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-19 17:01:30.292231 | orchestrator | Friday 19 September 2025 16:58:55 +0000 (0:00:01.279) 0:03:43.235 ****** 2025-09-19 17:01:30.292249 | orchestrator | changed: [testbed-node-0] 2025-09-19 
17:01:30.292255 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.292262 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.292268 | orchestrator | 2025-09-19 17:01:30.292275 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-19 17:01:30.292283 | orchestrator | Friday 19 September 2025 16:58:57 +0000 (0:00:02.247) 0:03:45.482 ****** 2025-09-19 17:01:30.292291 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.292298 | orchestrator | 2025-09-19 17:01:30.292306 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-19 17:01:30.292313 | orchestrator | Friday 19 September 2025 16:58:58 +0000 (0:00:01.177) 0:03:46.660 ****** 2025-09-19 17:01:30.292327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.292336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.292345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.292352 | orchestrator | 2025-09-19 17:01:30.292360 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-19 17:01:30.292367 | orchestrator | Friday 19 September 2025 16:59:02 +0000 (0:00:03.753) 0:03:50.414 ****** 2025-09-19 17:01:30.292379 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.292393 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.292405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.292413 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.292421 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.292428 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.292435 | orchestrator | 2025-09-19 17:01:30.292442 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-19 17:01:30.292449 | orchestrator | Friday 19 September 2025 16:59:03 +0000 (0:00:00.519) 0:03:50.933 ****** 2025-09-19 17:01:30.292457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 17:01:30.292465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 17:01:30.292472 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.292479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 17:01:30.292487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 17:01:30.292498 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.292505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 17:01:30.292516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-19 17:01:30.292524 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.292531 | orchestrator | 2025-09-19 17:01:30.292539 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-19 17:01:30.292546 | orchestrator | Friday 19 September 2025 16:59:03 +0000 (0:00:00.768) 0:03:51.702 ****** 2025-09-19 17:01:30.292554 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.292560 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.292567 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.292574 | orchestrator | 2025-09-19 17:01:30.292582 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-19 17:01:30.292589 | orchestrator | Friday 19 September 2025 16:59:05 +0000 (0:00:01.305) 0:03:53.007 ****** 2025-09-19 17:01:30.292596 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.292603 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.292610 | orchestrator | changed: [testbed-node-2] 2025-09-19 
17:01:30.292617 | orchestrator | 2025-09-19 17:01:30.292624 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-19 17:01:30.292631 | orchestrator | Friday 19 September 2025 16:59:07 +0000 (0:00:02.291) 0:03:55.299 ****** 2025-09-19 17:01:30.292638 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.292644 | orchestrator | 2025-09-19 17:01:30.292650 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-19 17:01:30.292656 | orchestrator | Friday 19 September 2025 16:59:08 +0000 (0:00:01.555) 0:03:56.854 ****** 2025-09-19 17:01:30.292667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.292675 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.292687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.292697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.292704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.292714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.292721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:01:30.292732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:01:30.292742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 17:01:30.292749 | orchestrator |
2025-09-19 17:01:30.292755 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2025-09-19 17:01:30.292762 | orchestrator | Friday 19 September 2025 16:59:13 +0000 (0:00:04.462) 0:04:01.316 ******
2025-09-19 17:01:30.292771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:01:30.292779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:01:30.292785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:01:30.292797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 17:01:30.292806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:01:30.292813 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.292819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 17:01:30.292826 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.292836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:01:30.292854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:01:30.292866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 17:01:30.292872 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.292879 | orchestrator |
2025-09-19 17:01:30.292885 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2025-09-19 17:01:30.292891 | orchestrator | Friday 19 September 2025 16:59:14 +0000 (0:00:00.979) 0:04:02.295 ******
2025-09-19 17:01:30.292898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 17:01:30.292909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 17:01:30.292917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 17:01:30.292923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 17:01:30.292929 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.292936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 17:01:30.292942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 17:01:30.292948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 17:01:30.292955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 17:01:30.292964 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.292971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 17:01:30.292977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2025-09-19 17:01:30.292987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 17:01:30.292993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2025-09-19 17:01:30.293000 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.293006 | orchestrator |
2025-09-19 17:01:30.293012 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2025-09-19 17:01:30.293019 | orchestrator | Friday 19 September 2025 16:59:15 +0000 (0:00:01.235) 0:04:03.531 ******
2025-09-19 17:01:30.293025 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.293031 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.293037 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.293043 | orchestrator |
2025-09-19 17:01:30.293049 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2025-09-19 17:01:30.293056 | orchestrator | Friday 19 September 2025 16:59:17 +0000 (0:00:01.417) 0:04:04.948 ******
2025-09-19 17:01:30.293062 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.293068 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.293074 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.293080 | orchestrator |
2025-09-19 17:01:30.293086 | orchestrator | TASK [include_role : nova-cell] ************************************************
2025-09-19 17:01:30.293093 | orchestrator | Friday 19 September 2025 16:59:19 +0000 (0:00:02.176) 0:04:07.125 ******
2025-09-19 17:01:30.293099 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:01:30.293105 | orchestrator |
2025-09-19 17:01:30.293111 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-09-19 17:01:30.293117 | orchestrator | Friday 19 September 2025 16:59:20 +0000 (0:00:01.591) 0:04:08.717 ******
2025-09-19 17:01:30.293124 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-09-19 17:01:30.293130 | orchestrator |
2025-09-19 17:01:30.293136 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-09-19 17:01:30.293143 | orchestrator | Friday 19 September 2025 16:59:21 +0000 (0:00:00.829) 0:04:09.547 ******
2025-09-19 17:01:30.293152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 17:01:30.293159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 17:01:30.293166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 17:01:30.293176 | orchestrator |
2025-09-19 17:01:30.293182 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-09-19 17:01:30.293189 | orchestrator | Friday 19 September 2025 16:59:25 +0000 (0:00:04.278) 0:04:13.825 ******
2025-09-19 17:01:30.293199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 17:01:30.293205 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.293212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 17:01:30.293218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 17:01:30.293225 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.293231 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.293237 | orchestrator |
2025-09-19 17:01:30.293243 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-09-19 17:01:30.293249 | orchestrator | Friday 19 September 2025 16:59:27 +0000 (0:00:01.373) 0:04:15.199 ******
2025-09-19 17:01:30.293255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-09-19 17:01:30.293262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-09-19 17:01:30.293269 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.293275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-09-19 17:01:30.293284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-09-19 17:01:30.293291 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.293297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-09-19 17:01:30.293304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-09-19 17:01:30.293314 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.293320 | orchestrator |
2025-09-19 17:01:30.293326 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-09-19 17:01:30.293333 | orchestrator | Friday 19 September 2025 16:59:28 +0000 (0:00:01.530) 0:04:16.730 ******
2025-09-19 17:01:30.293339 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.293345 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.293351 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.293357 | orchestrator |
2025-09-19 17:01:30.293364 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-09-19 17:01:30.293370 | orchestrator | Friday 19 September 2025 16:59:31 +0000 (0:00:02.656) 0:04:19.387 ******
2025-09-19 17:01:30.293376 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.293382 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.293388 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.293395 | orchestrator |
2025-09-19 17:01:30.293401 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-09-19 17:01:30.293407 | orchestrator | Friday 19 September 2025 16:59:34 +0000 (0:00:03.004) 0:04:22.391 ******
2025-09-19 17:01:30.293416 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-09-19 17:01:30.293422 | orchestrator |
2025-09-19 17:01:30.293429 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-09-19 17:01:30.293435 | orchestrator | Friday 19 September 2025 16:59:35 +0000 (0:00:01.334) 0:04:23.726 ******
2025-09-19 17:01:30.293441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 17:01:30.293448 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.293454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 17:01:30.293461 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.293467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 17:01:30.293474 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.293480 | orchestrator |
2025-09-19 17:01:30.293486 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-09-19 17:01:30.293492 | orchestrator | Friday 19 September 2025 16:59:37 +0000 (0:00:01.255) 0:04:24.981 ******
2025-09-19 17:01:30.293501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 17:01:30.293520 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.293530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 17:01:30.293540 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.293551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-09-19 17:01:30.293560 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.293566 | orchestrator |
2025-09-19 17:01:30.293573 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-09-19 17:01:30.293579 | orchestrator | Friday 19 September 2025 16:59:38 +0000 (0:00:01.357) 0:04:26.339 ******
2025-09-19 17:01:30.293585 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.293591 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.293597 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.293603 | orchestrator |
2025-09-19 17:01:30.293613 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-09-19 17:01:30.293619 | orchestrator | Friday 19 September 2025 16:59:40 +0000 (0:00:01.888) 0:04:28.227 ******
2025-09-19 17:01:30.293625 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:01:30.293632 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:01:30.293638 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:01:30.293644 | orchestrator |
2025-09-19 17:01:30.293650 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-09-19 17:01:30.293656 | orchestrator | Friday 19 September 2025 16:59:42 +0000 (0:00:02.387) 0:04:30.615 ******
2025-09-19 17:01:30.293663 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:01:30.293669 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:01:30.293675 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:01:30.293681 | orchestrator |
2025-09-19 17:01:30.293687 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-09-19 17:01:30.293693 | orchestrator | Friday 19 September 2025 16:59:45 +0000 (0:00:02.928) 0:04:33.543 ******
2025-09-19 17:01:30.293699 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-09-19 17:01:30.293706 | orchestrator |
2025-09-19 17:01:30.293712 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-09-19 17:01:30.293718 | orchestrator | Friday 19 September 2025 16:59:46 +0000 (0:00:00.852) 0:04:34.395 ******
2025-09-19 17:01:30.293724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-19 17:01:30.293735 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.293742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-19 17:01:30.293748 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.293758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-19 17:01:30.293765 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.293771 | orchestrator |
2025-09-19 17:01:30.293777 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-09-19 17:01:30.293783 | orchestrator | Friday 19 September 2025 16:59:47 +0000 (0:00:01.324) 0:04:35.720 ******
2025-09-19 17:01:30.293790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-19 17:01:30.293796 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.293802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-19 17:01:30.293809 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.293950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-19 17:01:30.293962 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.293968 | orchestrator |
2025-09-19 17:01:30.293974 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-09-19 17:01:30.293980 | orchestrator | Friday 19 September 2025 16:59:49 +0000 (0:00:01.312) 0:04:37.032 ******
2025-09-19 17:01:30.293986 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.293992 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.293998 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.294010 | orchestrator |
2025-09-19 17:01:30.294039 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-09-19 17:01:30.294047 | orchestrator | Friday 19 September 2025 16:59:50 +0000 (0:00:01.524) 0:04:38.557 ******
2025-09-19 17:01:30.294053 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:01:30.294060 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:01:30.294066 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:01:30.294073 | orchestrator |
2025-09-19 17:01:30.294079 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-09-19 17:01:30.294086 | orchestrator | Friday 19 September 2025 16:59:53 +0000 (0:00:02.375) 0:04:40.932 ******
2025-09-19 17:01:30.294092 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:01:30.294099 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:01:30.294106 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:01:30.294112 | orchestrator |
2025-09-19 17:01:30.294119 | orchestrator | TASK [include_role : octavia] **************************************************
2025-09-19 17:01:30.294125 | orchestrator | Friday 19 September 2025 16:59:56 +0000 (0:00:01.527) 0:04:44.109 ******
2025-09-19 17:01:30.294132 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:01:30.294139 | orchestrator |
2025-09-19 17:01:30.294145 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-09-19 17:01:30.294152 | orchestrator | Friday 19 September 2025 16:59:57 +0000 (0:00:01.527) 0:04:45.636 ******
2025-09-19 17:01:30.294162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-19 17:01:30.294170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-19 17:01:30.294177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-19 17:01:30.294204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-19 17:01:30.294217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-
17:01:30.294224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.294231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 17:01:30.294241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.294248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.294271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.294283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 17:01:30.294290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.294297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.294310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.294316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.294323 | orchestrator | 2025-09-19 17:01:30.294330 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-09-19 17:01:30.294336 | orchestrator | Friday 19 September 2025 17:00:01 +0000 (0:00:03.280) 0:04:48.917 ****** 2025-09-19 17:01:30.294359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.294372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 17:01:30.294379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.294385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.294392 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.294402 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.294427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.294457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 17:01:30.294463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.294469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.294475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.294480 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.294489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.294495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-19 17:01:30.294501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.294526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-19 17:01:30.294532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:01:30.294538 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.294543 | orchestrator | 2025-09-19 17:01:30.294549 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-09-19 17:01:30.294555 | orchestrator | Friday 19 September 2025 17:00:01 +0000 (0:00:00.714) 0:04:49.632 ****** 2025-09-19 
17:01:30.294562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-19 17:01:30.294569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-19 17:01:30.294575 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.294582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-19 17:01:30.294588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-19 17:01:30.294595 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.294601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-19 17:01:30.294607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-19 17:01:30.294617 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.294623 | orchestrator |
2025-09-19 17:01:30.294629 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-09-19 17:01:30.294635 | orchestrator | Friday 19 September 2025 17:00:03 +0000 (0:00:01.552) 0:04:51.184 ******
2025-09-19 17:01:30.294646 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.294653 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.294660 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.294666 | orchestrator |
2025-09-19 17:01:30.294672 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-09-19 17:01:30.294678 | orchestrator | Friday 19 September 2025 17:00:04 +0000 (0:00:01.461) 0:04:52.646 ******
2025-09-19 17:01:30.294684 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:01:30.294690 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:01:30.294697 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:01:30.294703 | orchestrator |
2025-09-19 17:01:30.294710 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-09-19 17:01:30.294716 | orchestrator | Friday 19 September 2025 17:00:06 +0000 (0:00:02.180) 0:04:54.827 ******
2025-09-19 17:01:30.294722 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:01:30.294728 | orchestrator |
2025-09-19 17:01:30.294734 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-09-19 17:01:30.294741 | orchestrator | Friday 19 September 2025 17:00:08 +0000 (0:00:01.354) 0:04:56.181 ******
2025-09-19 17:01:30.294764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:01:30.294772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:01:30.294778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-09-19 17:01:30.294787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:01:30.294814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:01:30.294821 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:01:30.294827 | orchestrator | 2025-09-19 17:01:30.294833 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-09-19 17:01:30.294839 | orchestrator | Friday 19 September 2025 17:00:13 +0000 (0:00:05.475) 0:05:01.657 ****** 2025-09-19 17:01:30.294860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-19 17:01:30.294873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-19 17:01:30.294879 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.294885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-19 17:01:30.294908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-19 17:01:30.294915 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.294921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-19 17:01:30.294929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-19 17:01:30.294939 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.294945 | orchestrator |
2025-09-19 17:01:30.294950 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-09-19 17:01:30.294956 | orchestrator | Friday 19 September 2025 17:00:14 +0000 (0:00:00.654) 0:05:02.311 ******
2025-09-19 17:01:30.294961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-19 17:01:30.294967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-19 17:01:30.294973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-19 17:01:30.294979 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.294984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-19 17:01:30.295004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-19 17:01:30.295011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-19 17:01:30.295016 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.295022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-19 17:01:30.295027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-19 17:01:30.295033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-19 17:01:30.295038 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.295043 | orchestrator |
2025-09-19 17:01:30.295049 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2025-09-19 17:01:30.295054 | orchestrator | Friday 19 September 2025 17:00:15 +0000 (0:00:00.916) 0:05:03.228 ******
2025-09-19 17:01:30.295064 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.295069 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.295074 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.295080 | orchestrator |
2025-09-19 17:01:30.295085 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2025-09-19 17:01:30.295090 | orchestrator | Friday 19 September 2025 17:00:16 +0000 (0:00:00.820) 0:05:04.049 ******
2025-09-19 17:01:30.295096 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.295101 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:01:30.295106 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:01:30.295112 | orchestrator |
2025-09-19 17:01:30.295117 | orchestrator | TASK [include_role : prometheus] ***********************************************
2025-09-19 17:01:30.295122 | orchestrator | Friday 19 September 2025 17:00:17 +0000 (0:00:01.328) 0:05:05.377 ******
2025-09-19 17:01:30.295128 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:01:30.295133 | orchestrator |
2025-09-19 17:01:30.295138 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2025-09-19 17:01:30.295144 | orchestrator | Friday 19 September 2025 17:00:18 +0000 (0:00:01.389) 0:05:06.767 ******
2025-09-19 17:01:30.295154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 17:01:30.295160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:01:30.295166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:01:30.295203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 17:01:30.295209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:01:30.295218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:01:30.295250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 17:01:30.295257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:01:30.295267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:01:30.295288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 17:01:30.295297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 17:01:30.295303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 17:01:30.295327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 17:01:30.295333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 17:01:30.295342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 17:01:30.295363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 17:01:30.295371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 17:01:30.295377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 17:01:30.295401 | orchestrator |
2025-09-19 17:01:30.295406 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-09-19 17:01:30.295412 | orchestrator | Friday 19 September 2025 17:00:23 +0000 (0:00:04.430) 0:05:11.198 ******
2025-09-19 17:01:30.295418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 17:01:30.295424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:01:30.295429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:01:30.295453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 17:01:30.295464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-19 17:01:30.295470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 17:01:30.295487 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:01:30.295495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 17:01:30.295501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:01:30.295513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:01:30.295525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:01:30.295531 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 17:01:30.295539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-19 17:01:30.295545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:01:30.295560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:01:30.295566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 17:01:30.295571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 17:01:30.295577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:01:30.295583 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.295588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:01:30.295594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:01:30.295600 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:01:30.295612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 17:01:30.295618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-19 17:01:30.295624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:01:30.295630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:01:30.295668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 17:01:30.295682 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.295688 | orchestrator | 2025-09-19 17:01:30.295694 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-09-19 17:01:30.295699 | orchestrator | Friday 19 September 2025 17:00:24 +0000 (0:00:01.187) 0:05:12.385 ****** 2025-09-19 17:01:30.295705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-19 17:01:30.295719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-19 17:01:30.295725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 17:01:30.295732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 17:01:30.295737 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.295743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-19 17:01:30.295752 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-19 17:01:30.295758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 17:01:30.295764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 17:01:30.295769 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.295775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-09-19 17:01:30.295780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-09-19 17:01:30.295786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 17:01:30.295792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-09-19 17:01:30.295797 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.295803 | orchestrator | 2025-09-19 17:01:30.295808 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-09-19 17:01:30.295813 | orchestrator | Friday 19 September 2025 17:00:25 +0000 (0:00:00.996) 0:05:13.382 ****** 2025-09-19 17:01:30.295819 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.295824 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.295830 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.295835 | orchestrator | 2025-09-19 17:01:30.295840 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-09-19 17:01:30.295863 | orchestrator | Friday 19 September 2025 17:00:25 +0000 (0:00:00.457) 0:05:13.839 ****** 2025-09-19 17:01:30.295869 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.295874 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.295880 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.295885 | orchestrator | 2025-09-19 17:01:30.295890 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-09-19 17:01:30.295904 | orchestrator | Friday 19 September 2025 17:00:27 +0000 (0:00:01.436) 0:05:15.276 ****** 2025-09-19 17:01:30.295910 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.295915 | orchestrator | 2025-09-19 17:01:30.295921 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-09-19 17:01:30.295926 | orchestrator | Friday 19 September 2025 17:00:29 +0000 (0:00:01.721) 0:05:16.997 ****** 2025-09-19 17:01:30.295932 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 17:01:30.295942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 17:01:30.295948 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-19 17:01:30.295954 | orchestrator | 2025-09-19 17:01:30.295960 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-09-19 17:01:30.295969 | orchestrator | Friday 19 September 2025 17:00:31 +0000 (0:00:02.400) 0:05:19.397 ****** 2025-09-19 17:01:30.295977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-19 17:01:30.295984 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.295989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-19 17:01:30.295995 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.296004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-09-19 17:01:30.296010 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.296015 | orchestrator | 2025-09-19 17:01:30.296021 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-09-19 17:01:30.296026 | orchestrator | Friday 19 September 2025 17:00:31 +0000 (0:00:00.396) 0:05:19.793 ****** 2025-09-19 17:01:30.296032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-19 17:01:30.296037 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.296043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-19 17:01:30.296048 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.296057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-09-19 17:01:30.296062 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.296068 | orchestrator | 2025-09-19 17:01:30.296073 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-09-19 17:01:30.296079 | orchestrator | Friday 19 September 2025 17:00:32 +0000 (0:00:01.041) 0:05:20.835 ****** 2025-09-19 17:01:30.296084 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 17:01:30.296090 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.296095 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.296100 | orchestrator | 2025-09-19 17:01:30.296106 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-09-19 17:01:30.296111 | orchestrator | Friday 19 September 2025 17:00:33 +0000 (0:00:00.483) 0:05:21.319 ****** 2025-09-19 17:01:30.296117 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.296122 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.296128 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.296133 | orchestrator | 2025-09-19 17:01:30.296139 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-09-19 17:01:30.296144 | orchestrator | Friday 19 September 2025 17:00:34 +0000 (0:00:01.379) 0:05:22.699 ****** 2025-09-19 17:01:30.296149 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:01:30.296155 | orchestrator | 2025-09-19 17:01:30.296160 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-09-19 17:01:30.296166 | orchestrator | Friday 19 September 2025 17:00:36 +0000 (0:00:01.779) 0:05:24.478 ****** 2025-09-19 17:01:30.296174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.296183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.296189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.296199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.296208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.296214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-09-19 17:01:30.296220 | orchestrator | 2025-09-19 17:01:30.296228 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-09-19 17:01:30.296233 | orchestrator | Friday 19 September 2025 17:00:43 +0000 (0:00:06.577) 0:05:31.056 ****** 2025-09-19 17:01:30.296239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.296248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.296254 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.296262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.296268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.296274 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.296282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.296292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-19 17:01:30.296297 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.296303 | orchestrator | 2025-09-19 17:01:30.296308 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-09-19 17:01:30.296314 | orchestrator | Friday 19 September 2025 17:00:43 +0000 (0:00:00.600) 0:05:31.656 ****** 2025-09-19 17:01:30.296319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 17:01:30.296325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 17:01:30.296331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 17:01:30.296341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 17:01:30.296347 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.296352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 17:01:30.296358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 17:01:30.296363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 17:01:30.296369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 17:01:30.296374 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.296380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 17:01:30.296388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-09-19 17:01:30.296397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 17:01:30.296403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-09-19 17:01:30.296408 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.296414 | orchestrator | 2025-09-19 17:01:30.296419 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-09-19 17:01:30.296424 | orchestrator | Friday 19 September 2025 17:00:45 +0000 (0:00:01.368) 0:05:33.024 ****** 2025-09-19 17:01:30.296430 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.296435 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.296441 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.296446 | orchestrator | 2025-09-19 17:01:30.296451 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-09-19 17:01:30.296457 | orchestrator | Friday 19 September 2025 17:00:46 +0000 (0:00:01.250) 0:05:34.275 ****** 2025-09-19 17:01:30.296462 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.296468 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.296473 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.296478 | orchestrator | 
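The haproxy-config loop items above all share one shape: a per-service mapping whose entries carry `enabled`, `mode`, `external`, `port`, `listen_port`, and `tls_backend` (plus `external_fqdn` for external frontends). A minimal sketch of that data shape, with the skyline values copied from the log; the helper `enabled_listen_ports` is illustrative, not part of kolla-ansible:

```python
# Sketch of the per-service haproxy mapping visible in the loop items above.
# Values are taken from the log; the helper below is hypothetical.
skyline_haproxy = {
    "skyline_apiserver": {"enabled": "yes", "mode": "http", "external": False,
                          "port": "9998", "listen_port": "9998",
                          "tls_backend": "no"},
    "skyline_apiserver_external": {"enabled": "yes", "mode": "http",
                                   "external": True,
                                   "external_fqdn": "api.testbed.osism.xyz",
                                   "port": "9998", "listen_port": "9998",
                                   "tls_backend": "no"},
    "skyline_console": {"enabled": "yes", "mode": "http", "external": False,
                        "port": "9999", "listen_port": "9999",
                        "tls_backend": "no"},
}


def enabled_listen_ports(services):
    # Collect the frontend ports haproxy would bind for enabled entries --
    # the same keys the "Configuring firewall" task iterates over.
    return sorted({v["listen_port"] for v in services.values()
                   if v["enabled"] == "yes"})


print(enabled_listen_ports(skyline_haproxy))  # ['9998', '9999']
```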
2025-09-19 17:01:30.296484 | orchestrator | TASK [include_role : swift] **************************************************** 2025-09-19 17:01:30.296489 | orchestrator | Friday 19 September 2025 17:00:48 +0000 (0:00:02.061) 0:05:36.336 ****** 2025-09-19 17:01:30.296494 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.296500 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.296505 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.296511 | orchestrator | 2025-09-19 17:01:30.296516 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-09-19 17:01:30.296521 | orchestrator | Friday 19 September 2025 17:00:48 +0000 (0:00:00.336) 0:05:36.672 ****** 2025-09-19 17:01:30.296527 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.296532 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.296537 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.296543 | orchestrator | 2025-09-19 17:01:30.296548 | orchestrator | TASK [include_role : trove] **************************************************** 2025-09-19 17:01:30.296554 | orchestrator | Friday 19 September 2025 17:00:49 +0000 (0:00:00.323) 0:05:36.996 ****** 2025-09-19 17:01:30.296559 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.296564 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.296570 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.296575 | orchestrator | 2025-09-19 17:01:30.296581 | orchestrator | TASK [include_role : venus] **************************************************** 2025-09-19 17:01:30.296586 | orchestrator | Friday 19 September 2025 17:00:49 +0000 (0:00:00.635) 0:05:37.632 ****** 2025-09-19 17:01:30.296592 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.296597 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.296603 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.296608 | orchestrator | 
2025-09-19 17:01:30.296613 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-09-19 17:01:30.296619 | orchestrator | Friday 19 September 2025 17:00:50 +0000 (0:00:00.324) 0:05:37.957 ****** 2025-09-19 17:01:30.296624 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.296630 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.296635 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.296641 | orchestrator | 2025-09-19 17:01:30.296646 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-09-19 17:01:30.296658 | orchestrator | Friday 19 September 2025 17:00:50 +0000 (0:00:00.330) 0:05:38.287 ****** 2025-09-19 17:01:30.296663 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.296669 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.296674 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.296680 | orchestrator | 2025-09-19 17:01:30.296685 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-09-19 17:01:30.296691 | orchestrator | Friday 19 September 2025 17:00:51 +0000 (0:00:00.823) 0:05:39.110 ****** 2025-09-19 17:01:30.296696 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:01:30.296702 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:01:30.296707 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:01:30.296712 | orchestrator | 2025-09-19 17:01:30.296718 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-09-19 17:01:30.296723 | orchestrator | Friday 19 September 2025 17:00:51 +0000 (0:00:00.746) 0:05:39.857 ****** 2025-09-19 17:01:30.296729 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:01:30.296734 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:01:30.296740 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:01:30.296745 | orchestrator | 2025-09-19 17:01:30.296750 | 
orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-09-19 17:01:30.296756 | orchestrator | Friday 19 September 2025 17:00:52 +0000 (0:00:00.345) 0:05:40.203 ****** 2025-09-19 17:01:30.296761 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:01:30.296767 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:01:30.296772 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:01:30.296778 | orchestrator | 2025-09-19 17:01:30.296783 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-09-19 17:01:30.296789 | orchestrator | Friday 19 September 2025 17:00:53 +0000 (0:00:00.954) 0:05:41.157 ****** 2025-09-19 17:01:30.296794 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:01:30.296799 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:01:30.296805 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:01:30.296810 | orchestrator | 2025-09-19 17:01:30.296815 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-09-19 17:01:30.296821 | orchestrator | Friday 19 September 2025 17:00:54 +0000 (0:00:01.223) 0:05:42.381 ****** 2025-09-19 17:01:30.296826 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:01:30.296832 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:01:30.296840 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:01:30.296858 | orchestrator | 2025-09-19 17:01:30.296863 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-09-19 17:01:30.296869 | orchestrator | Friday 19 September 2025 17:00:55 +0000 (0:00:00.954) 0:05:43.336 ****** 2025-09-19 17:01:30.296874 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.296880 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.296885 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.296890 | orchestrator | 2025-09-19 17:01:30.296896 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for 
backup haproxy to start] ************** 2025-09-19 17:01:30.296901 | orchestrator | Friday 19 September 2025 17:01:00 +0000 (0:00:04.701) 0:05:48.038 ****** 2025-09-19 17:01:30.296906 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:01:30.296912 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:01:30.296917 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:01:30.296922 | orchestrator | 2025-09-19 17:01:30.296928 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-09-19 17:01:30.296933 | orchestrator | Friday 19 September 2025 17:01:02 +0000 (0:00:02.815) 0:05:50.853 ****** 2025-09-19 17:01:30.296939 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.296944 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.296949 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.296955 | orchestrator | 2025-09-19 17:01:30.296960 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-09-19 17:01:30.296966 | orchestrator | Friday 19 September 2025 17:01:10 +0000 (0:00:07.688) 0:05:58.542 ****** 2025-09-19 17:01:30.296974 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:01:30.296980 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:01:30.296985 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:01:30.296990 | orchestrator | 2025-09-19 17:01:30.296996 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-09-19 17:01:30.297001 | orchestrator | Friday 19 September 2025 17:01:14 +0000 (0:00:04.223) 0:06:02.766 ****** 2025-09-19 17:01:30.297007 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:01:30.297012 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:01:30.297017 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:01:30.297023 | orchestrator | 2025-09-19 17:01:30.297028 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 
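The "Wait for backup haproxy/proxysql to start" handlers above poll until the restarted container accepts connections, conceptually like Ansible's `wait_for` module. A minimal stand-alone sketch of such a check (function name and defaults are hypothetical):

```python
import socket
import time


def wait_for_listen(host: str, port: int, timeout: float = 60.0,
                    interval: float = 1.0) -> bool:
    """Poll until a TCP service accepts connections on (host, port)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True  # service is listening
        except OSError:
            time.sleep(interval)  # not up yet; retry
    return False
```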
2025-09-19 17:01:30.297033 | orchestrator | Friday 19 September 2025 17:01:24 +0000 (0:00:09.389) 0:06:12.155 ****** 2025-09-19 17:01:30.297039 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.297044 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.297049 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.297055 | orchestrator | 2025-09-19 17:01:30.297060 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-09-19 17:01:30.297066 | orchestrator | Friday 19 September 2025 17:01:24 +0000 (0:00:00.359) 0:06:12.514 ****** 2025-09-19 17:01:30.297071 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.297076 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.297082 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.297087 | orchestrator | 2025-09-19 17:01:30.297092 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-09-19 17:01:30.297098 | orchestrator | Friday 19 September 2025 17:01:24 +0000 (0:00:00.335) 0:06:12.850 ****** 2025-09-19 17:01:30.297103 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.297109 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.297114 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.297119 | orchestrator | 2025-09-19 17:01:30.297125 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-09-19 17:01:30.297130 | orchestrator | Friday 19 September 2025 17:01:25 +0000 (0:00:00.692) 0:06:13.543 ****** 2025-09-19 17:01:30.297135 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.297141 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.297146 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.297151 | orchestrator | 2025-09-19 17:01:30.297157 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 
2025-09-19 17:01:30.297162 | orchestrator | Friday 19 September 2025 17:01:26 +0000 (0:00:00.357) 0:06:13.901 ****** 2025-09-19 17:01:30.297168 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.297176 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.297181 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.297187 | orchestrator | 2025-09-19 17:01:30.297192 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-09-19 17:01:30.297197 | orchestrator | Friday 19 September 2025 17:01:26 +0000 (0:00:00.356) 0:06:14.257 ****** 2025-09-19 17:01:30.297203 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:01:30.297208 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:01:30.297214 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:01:30.297219 | orchestrator | 2025-09-19 17:01:30.297224 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-09-19 17:01:30.297230 | orchestrator | Friday 19 September 2025 17:01:26 +0000 (0:00:00.355) 0:06:14.613 ****** 2025-09-19 17:01:30.297235 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:01:30.297240 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:01:30.297246 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:01:30.297251 | orchestrator | 2025-09-19 17:01:30.297257 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-09-19 17:01:30.297262 | orchestrator | Friday 19 September 2025 17:01:27 +0000 (0:00:01.272) 0:06:15.886 ****** 2025-09-19 17:01:30.297268 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:01:30.297273 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:01:30.297282 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:01:30.297287 | orchestrator | 2025-09-19 17:01:30.297293 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:01:30.297298 | 
orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 17:01:30.297304 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 17:01:30.297310 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-19 17:01:30.297315 | orchestrator |
2025-09-19 17:01:30.297320 | orchestrator |
2025-09-19 17:01:30.297328 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 17:01:30.297334 | orchestrator | Friday 19 September 2025 17:01:28 +0000 (0:00:00.911) 0:06:16.798 ******
2025-09-19 17:01:30.297339 | orchestrator | ===============================================================================
2025-09-19 17:01:30.297345 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.39s
2025-09-19 17:01:30.297350 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 7.69s
2025-09-19 17:01:30.297355 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.58s
2025-09-19 17:01:30.297361 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.77s
2025-09-19 17:01:30.297366 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 5.70s
2025-09-19 17:01:30.297372 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.48s
2025-09-19 17:01:30.297377 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.47s
2025-09-19 17:01:30.297382 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.70s
2025-09-19 17:01:30.297388 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.70s
2025-09-19 17:01:30.297393 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 4.48s
2025-09-19 17:01:30.297398 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.46s
2025-09-19 17:01:30.297403 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.43s
2025-09-19 17:01:30.297409 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.41s
2025-09-19 17:01:30.297414 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.28s
2025-09-19 17:01:30.297420 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.22s
2025-09-19 17:01:30.297425 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.19s
2025-09-19 17:01:30.297430 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.18s
2025-09-19 17:01:30.297435 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.12s
2025-09-19 17:01:30.297441 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.09s
2025-09-19 17:01:30.297446 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 3.90s
2025-09-19 17:01:30.297452 | orchestrator | 2025-09-19 17:01:30 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 17:01:30.297457 | orchestrator | 2025-09-19 17:01:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:01:33.327794 | orchestrator | 2025-09-19 17:01:33 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:01:33.330326 | orchestrator | 2025-09-19 17:01:33 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:01:33.332266 | orchestrator | 2025-09-19 17:01:33 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19
17:01:33.332456 | orchestrator | 2025-09-19 17:01:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:02:43.451121 | orchestrator | 2025-09-19 17:02:43 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:02:43.452108 | orchestrator | 2025-09-19 17:02:43 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:02:43.453741 | orchestrator | 2025-09-19 17:02:43 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19
17:02:43.454058 | orchestrator | 2025-09-19 17:02:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:02:46.505954 | orchestrator | 2025-09-19 17:02:46 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:02:46.507708 | orchestrator | 2025-09-19 17:02:46 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:02:46.510070 | orchestrator | 2025-09-19 17:02:46 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:02:46.510145 | orchestrator | 2025-09-19 17:02:46 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:02:49.562490 | orchestrator | 2025-09-19 17:02:49 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:02:49.565817 | orchestrator | 2025-09-19 17:02:49 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:02:49.567583 | orchestrator | 2025-09-19 17:02:49 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:02:49.567627 | orchestrator | 2025-09-19 17:02:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:02:52.613283 | orchestrator | 2025-09-19 17:02:52 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:02:52.615508 | orchestrator | 2025-09-19 17:02:52 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:02:52.618078 | orchestrator | 2025-09-19 17:02:52 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:02:52.618341 | orchestrator | 2025-09-19 17:02:52 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:02:55.652547 | orchestrator | 2025-09-19 17:02:55 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:02:55.652635 | orchestrator | 2025-09-19 17:02:55 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:02:55.653512 | orchestrator | 2025-09-19 17:02:55 | 
INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:02:55.653537 | orchestrator | 2025-09-19 17:02:55 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:02:58.682439 | orchestrator | 2025-09-19 17:02:58 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:02:58.685438 | orchestrator | 2025-09-19 17:02:58 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:02:58.687306 | orchestrator | 2025-09-19 17:02:58 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:02:58.687831 | orchestrator | 2025-09-19 17:02:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:01.733350 | orchestrator | 2025-09-19 17:03:01 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:01.734804 | orchestrator | 2025-09-19 17:03:01 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:03:01.737103 | orchestrator | 2025-09-19 17:03:01 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:01.737905 | orchestrator | 2025-09-19 17:03:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:04.779469 | orchestrator | 2025-09-19 17:03:04 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:04.782186 | orchestrator | 2025-09-19 17:03:04 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:03:04.785084 | orchestrator | 2025-09-19 17:03:04 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:04.785136 | orchestrator | 2025-09-19 17:03:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:07.826486 | orchestrator | 2025-09-19 17:03:07 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:07.828126 | orchestrator | 2025-09-19 17:03:07 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in 
state STARTED 2025-09-19 17:03:07.829836 | orchestrator | 2025-09-19 17:03:07 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:07.829891 | orchestrator | 2025-09-19 17:03:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:10.866368 | orchestrator | 2025-09-19 17:03:10 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:10.866756 | orchestrator | 2025-09-19 17:03:10 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:03:10.869109 | orchestrator | 2025-09-19 17:03:10 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:10.869226 | orchestrator | 2025-09-19 17:03:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:13.918246 | orchestrator | 2025-09-19 17:03:13 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:13.919385 | orchestrator | 2025-09-19 17:03:13 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:03:13.921296 | orchestrator | 2025-09-19 17:03:13 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:13.921319 | orchestrator | 2025-09-19 17:03:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:16.966726 | orchestrator | 2025-09-19 17:03:16 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:16.969200 | orchestrator | 2025-09-19 17:03:16 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:03:16.971562 | orchestrator | 2025-09-19 17:03:16 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:16.971596 | orchestrator | 2025-09-19 17:03:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:20.019335 | orchestrator | 2025-09-19 17:03:20 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:20.020240 | orchestrator 
| 2025-09-19 17:03:20 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:03:20.022390 | orchestrator | 2025-09-19 17:03:20 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:20.022443 | orchestrator | 2025-09-19 17:03:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:23.068758 | orchestrator | 2025-09-19 17:03:23 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:23.069213 | orchestrator | 2025-09-19 17:03:23 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:03:23.070150 | orchestrator | 2025-09-19 17:03:23 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:23.070212 | orchestrator | 2025-09-19 17:03:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:26.116253 | orchestrator | 2025-09-19 17:03:26 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:26.118166 | orchestrator | 2025-09-19 17:03:26 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:03:26.120292 | orchestrator | 2025-09-19 17:03:26 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:26.120802 | orchestrator | 2025-09-19 17:03:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:29.166413 | orchestrator | 2025-09-19 17:03:29 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:29.166505 | orchestrator | 2025-09-19 17:03:29 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:03:29.167104 | orchestrator | 2025-09-19 17:03:29 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:29.167129 | orchestrator | 2025-09-19 17:03:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:32.216912 | orchestrator | 2025-09-19 17:03:32 | INFO  | Task 
a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:32.217920 | orchestrator | 2025-09-19 17:03:32 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:03:32.220158 | orchestrator | 2025-09-19 17:03:32 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:32.220345 | orchestrator | 2025-09-19 17:03:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:35.277065 | orchestrator | 2025-09-19 17:03:35 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:35.279432 | orchestrator | 2025-09-19 17:03:35 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:03:35.281716 | orchestrator | 2025-09-19 17:03:35 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:35.282127 | orchestrator | 2025-09-19 17:03:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:38.321285 | orchestrator | 2025-09-19 17:03:38 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:38.322961 | orchestrator | 2025-09-19 17:03:38 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:03:38.325518 | orchestrator | 2025-09-19 17:03:38 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:38.326002 | orchestrator | 2025-09-19 17:03:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:03:41.369570 | orchestrator | 2025-09-19 17:03:41 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED 2025-09-19 17:03:41.371117 | orchestrator | 2025-09-19 17:03:41 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED 2025-09-19 17:03:41.372583 | orchestrator | 2025-09-19 17:03:41 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED 2025-09-19 17:03:41.372977 | orchestrator | 2025-09-19 17:03:41 | INFO  | Wait 1 second(s) until the next 
check
2025-09-19 17:03:44.417175 | orchestrator | 2025-09-19 17:03:44 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:03:44.418781 | orchestrator | 2025-09-19 17:03:44 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:03:44.420372 | orchestrator | 2025-09-19 17:03:44 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 17:03:44.420435 | orchestrator | 2025-09-19 17:03:44 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:03:47.465492 | orchestrator | 2025-09-19 17:03:47 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:03:47.466680 | orchestrator | 2025-09-19 17:03:47 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:03:47.469062 | orchestrator | 2025-09-19 17:03:47 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state STARTED
2025-09-19 17:03:47.469199 | orchestrator | 2025-09-19 17:03:47 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:03:50.516280 | orchestrator | 2025-09-19 17:03:50 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:03:50.519052 | orchestrator | 2025-09-19 17:03:50 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:03:50.521058 | orchestrator | 2025-09-19 17:03:50 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:03:50.527513 | orchestrator | 2025-09-19 17:03:50 | INFO  | Task 1de75f07-d03f-4a3f-b305-2182a19c95b5 is in state SUCCESS
2025-09-19 17:03:50.530748 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************

TASK [ceph-facts : Include facts.yml] ******************************************
Friday 19 September 2025 16:52:49 +0000 (0:00:00.873) 0:00:00.873 ******
included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-facts : Check if it is atomic host] *********************************
Friday 19 September 2025 16:52:50 +0000 (0:00:01.278) 0:00:02.151 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact is_atomic] *****************************************
Friday 19 September 2025 16:52:52 +0000 (0:00:00.600) 0:00:04.196 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Check if podman binary is present] **************************
Friday 19 September 2025 16:52:52 +0000 (0:00:00.600) 0:00:04.797 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact container_binary] **********************************
Friday 19 September 2025 16:52:54 +0000 (0:00:01.158) 0:00:05.955 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
Friday 19 September 2025 16:52:54 +0000 (0:00:00.853) 0:00:06.809 ******
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
Friday 19 September 2025 16:52:55 +0000 (0:00:00.891) 0:00:07.700 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
Friday 19 September 2025 16:52:56 +0000 (0:00:01.096) 0:00:08.797 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-5]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
Friday 19 September 2025 16:52:57 +0000 (0:00:00.753) 0:00:09.550 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
Friday 19 September 2025 16:52:58 +0000 (0:00:00.774) 0:00:10.325 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
Friday 19 September 2025 16:52:59 +0000 (0:00:00.563) 0:00:10.889 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Find a running mon container] *******************************
Friday 19 September 2025 16:53:00 +0000 (0:00:01.016) 0:00:11.905 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-facts : Check for a ceph mon socket] ********************************
Friday 19 September 2025 16:53:03 +0000 (0:00:03.406) 0:00:15.312 ******
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-3]

TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
Friday 19 September 2025 16:53:04 +0000 (0:00:00.592) 0:00:15.905 ******
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
Friday 19 September 2025 16:53:05 +0000 (0:00:01.191) 0:00:17.096 ******
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact running_mon - container] ***************************
Friday 19 September 2025 16:53:05 +0000 (0:00:00.408) 0:00:17.505 ******
skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-19 16:53:01.040122', 'end': '2025-09-19 16:53:01.435402', 'delta': '0:00:00.395280', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-19 16:53:02.140005', 'end': '2025-09-19 16:53:02.425208', 'delta': '0:00:00.285203', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-19 16:53:02.981804', 'end': '2025-09-19 16:53:03.257820', 'delta': '0:00:00.276016', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
Friday 19 September 2025 16:53:06 +0000 (0:00:00.590) 0:00:18.095 ******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-facts : Get current fsid if cluster is already running] *************
Friday 19 September 2025 16:53:08 +0000 (0:00:02.650) 0:00:20.745 ******
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
Friday 19 September 2025 16:53:10 +0000 (0:00:01.319) 0:00:22.065 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Get current fsid] *******************************************
Friday 19 September 2025 16:53:11 +0000 (0:00:01.624) 0:00:23.689 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact fsid] **********************************************
Friday 19 September 2025 16:53:13 +0000 (0:00:01.590) 0:00:25.280 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
Friday 19 September 2025 16:53:14 +0000 (0:00:00.996) 0:00:26.276 ******
skipping: [testbed-node-3]

TASK [ceph-facts : Generate cluster fsid] **************************************
Friday 19 September 2025 16:53:14 +0000 (0:00:00.101) 0:00:26.377 ******
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact fsid] **********************************************
Friday 19 September 2025 16:53:14 +0000 (0:00:00.203) 0:00:26.581 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Resolve device link(s)] *************************************
Friday 19 September 2025 16:53:15 +0000 (0:00:00.958) 0:00:27.539 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
Friday 19 September 2025 16:53:16 +0000 (0:00:00.823) 0:00:28.362 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
Friday 19 September 2025 16:53:17 +0000 (0:00:00.586) 0:00:28.948 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
Friday 19 September 2025 16:53:18 +0000 (0:00:01.138) 0:00:30.086 ******
skipping: [testbed-node-3]
skipping: [testbed-node-5]
skipping: [testbed-node-4]
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
Friday 19 September 2025 16:53:19 +0000 (0:00:00.840) 0:00:30.927 ******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
2025-09-19 17:03:50.534899 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.534910 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.534921 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.534932 | orchestrator |
2025-09-19 17:03:50.534942 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-19 17:03:50.534953 | orchestrator | Friday 19 September 2025 16:53:19 +0000 (0:00:00.820) 0:00:31.748 ******
2025-09-19 17:03:50.534964 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.534975 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.534985 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.534996 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.535007 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.535017 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.535028 | orchestrator |
2025-09-19 17:03:50.535039 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-19 17:03:50.535050 | orchestrator | Friday 19 September 2025 16:53:20 +0000 (0:00:00.705) 0:00:32.453 ******
2025-09-19 17:03:50.535063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--502e1679--2b8a--59ad--b2cc--f53252d80a70-osd--block--502e1679--2b8a--59ad--b2cc--f53252d80a70', 'dm-uuid-LVM-Vg40vHetn4R56D6Ffi9uOciNR5oL0Yiiyh9RxQKWB6m8dBBSQ0ooWdkOFaYkE1WC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535076 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--189b9442--6cba--5a76--9378--3098f039bcec-osd--block--189b9442--6cba--5a76--9378--3098f039bcec', 'dm-uuid-LVM-noY6foXpitZX6cHQDHPdWcoWEE9GLEeGpeaCHFyUXan2usFgI5rj3Wakp48dwX55'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535155 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2-osd--block--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2', 'dm-uuid-LVM-QcDdh1J2jxOs6tp7Oe4XT0Zz1JSjF5dI8NXygTe7o6IzXrm3Ci2oWGBt6XdcCnD9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7-osd--block--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7', 'dm-uuid-LVM-LDASMkfHr0khVCowCzXLcMatR1wSlg7UDRK7AXLK7sqvKBaj0TbHwVG9FYXRIj2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part1', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part14', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part15', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part16', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535270 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--502e1679--2b8a--59ad--b2cc--f53252d80a70-osd--block--502e1679--2b8a--59ad--b2cc--f53252d80a70'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c9QEt0-HRgd-bY03-Jd9F-51yF-8rcZ-PPDFR4', 'scsi-0QEMU_QEMU_HARDDISK_49605ec5-af84-4e56-b6e7-0932efbf1bcd', 'scsi-SQEMU_QEMU_HARDDISK_49605ec5-af84-4e56-b6e7-0932efbf1bcd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535283 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535322 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--189b9442--6cba--5a76--9378--3098f039bcec-osd--block--189b9442--6cba--5a76--9378--3098f039bcec'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g0pOpv-Df9H-7sCV-gXAD-ztyf-iKEa-62mI36', 'scsi-0QEMU_QEMU_HARDDISK_9516e090-09d3-47b2-a672-12f5ce683363', 'scsi-SQEMU_QEMU_HARDDISK_9516e090-09d3-47b2-a672-12f5ce683363'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535335 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfd7083e-59a5-451a-9789-189314eae1f5', 'scsi-SQEMU_QEMU_HARDDISK_bfd7083e-59a5-451a-9789-189314eae1f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535371 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535382 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535445 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part1', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part14', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part15', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part16', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535458 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2-osd--block--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GbVk8X-bjpf-wsn1-v0bH-HW56-9ucN-vMi0Ec', 'scsi-0QEMU_QEMU_HARDDISK_8c3574da-2fac-4f58-bc83-f51ba9425a73', 'scsi-SQEMU_QEMU_HARDDISK_8c3574da-2fac-4f58-bc83-f51ba9425a73'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7-osd--block--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FyKxvb-dSUu-GGIj-2HDa-wyiz-cOn4-Bzoouk', 'scsi-0QEMU_QEMU_HARDDISK_8547d473-0710-428a-9585-3879cf611acd', 'scsi-SQEMU_QEMU_HARDDISK_8547d473-0710-428a-9585-3879cf611acd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535499 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4de995f9--e371--53ec--a5e6--95298d442fa2-osd--block--4de995f9--e371--53ec--a5e6--95298d442fa2', 'dm-uuid-LVM-4hDC3ozcjstwWQ3E5UxqBrjJp5mz1cIfsVn5PTVRwsj0jMyjGmhIMfAIPNf2GBTF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ef3193b-7b85-4a69-91dc-ff1919c1d0b3', 'scsi-SQEMU_QEMU_HARDDISK_8ef3193b-7b85-4a69-91dc-ff1919c1d0b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ea687e85--c7c1--53f3--8dfd--7d637eed1a38-osd--block--ea687e85--c7c1--53f3--8dfd--7d637eed1a38', 'dm-uuid-LVM-bBiflqjnftduSHS5XiwByNmPAVGwW9bI5l8qlrblgYE7PdOCmSNNWyQdESxdCPSI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535556 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.535568 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535585 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535630 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part1', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part14', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part15', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part16', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4de995f9--e371--53ec--a5e6--95298d442fa2-osd--block--4de995f9--e371--53ec--a5e6--95298d442fa2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WoixB1-M6Dg-l8nc-m7Vg-jCLx-Etkb-ybuhtE', 'scsi-0QEMU_QEMU_HARDDISK_5e704911-d475-45db-a46e-b2c1a2edd26e', 'scsi-SQEMU_QEMU_HARDDISK_5e704911-d475-45db-a46e-b2c1a2edd26e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535706 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ea687e85--c7c1--53f3--8dfd--7d637eed1a38-osd--block--ea687e85--c7c1--53f3--8dfd--7d637eed1a38'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-prhbwg-ulIC-7M5H-Gfur-Z1ct-zcRp-etKAGn', 'scsi-0QEMU_QEMU_HARDDISK_ea7e2490-24d2-49b7-b6d3-38bb6098dff1', 'scsi-SQEMU_QEMU_HARDDISK_ea7e2490-24d2-49b7-b6d3-38bb6098dff1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc231350-c60d-45ad-9b08-eb0e8cdec0b5', 'scsi-SQEMU_QEMU_HARDDISK_bc231350-c60d-45ad-9b08-eb0e8cdec0b5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-19 17:03:50.535740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-19 17:03:50.535758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
 2025-09-19 17:03:50.535769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.535780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.535813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.535825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.535836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.535865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.535877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a', 'scsi-SQEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part1', 'scsi-SQEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part14', 'scsi-SQEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part15', 'scsi-SQEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part16', 'scsi-SQEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:03:50.535897 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.535921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:03:50.535934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.535945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.535956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.535967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.535978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.535996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.536007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.536019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.536052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447', 'scsi-SQEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part1', 'scsi-SQEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part14', 'scsi-SQEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part15', 'scsi-SQEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part16', 'scsi-SQEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:03:50.536076 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:03:50.536145 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.536166 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.536185 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.536207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.536227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.536246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.536267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.536306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.536327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.536348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-09-19 17:03:50.536369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:03:50.536389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc', 'scsi-SQEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part1', 'scsi-SQEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part14', 'scsi-SQEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part15', 'scsi-SQEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part16', 'scsi-SQEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:03:50.536441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:03:50.536465 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.536486 | orchestrator | 2025-09-19 17:03:50.536507 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-19 17:03:50.536526 | orchestrator | Friday 19 September 2025 16:53:21 +0000 (0:00:01.389) 0:00:33.843 ****** 2025-09-19 17:03:50.536547 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--502e1679--2b8a--59ad--b2cc--f53252d80a70-osd--block--502e1679--2b8a--59ad--b2cc--f53252d80a70', 
'dm-uuid-LVM-Vg40vHetn4R56D6Ffi9uOciNR5oL0Yiiyh9RxQKWB6m8dBBSQ0ooWdkOFaYkE1WC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536571 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--189b9442--6cba--5a76--9378--3098f039bcec-osd--block--189b9442--6cba--5a76--9378--3098f039bcec', 'dm-uuid-LVM-noY6foXpitZX6cHQDHPdWcoWEE9GLEeGpeaCHFyUXan2usFgI5rj3Wakp48dwX55'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536602 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536623 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536644 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536681 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536725 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536755 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536777 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536817 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part1', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part14', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part15', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 
'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part16', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536842 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--502e1679--2b8a--59ad--b2cc--f53252d80a70-osd--block--502e1679--2b8a--59ad--b2cc--f53252d80a70'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c9QEt0-HRgd-bY03-Jd9F-51yF-8rcZ-PPDFR4', 'scsi-0QEMU_QEMU_HARDDISK_49605ec5-af84-4e56-b6e7-0932efbf1bcd', 'scsi-SQEMU_QEMU_HARDDISK_49605ec5-af84-4e56-b6e7-0932efbf1bcd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536940 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--189b9442--6cba--5a76--9378--3098f039bcec-osd--block--189b9442--6cba--5a76--9378--3098f039bcec'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g0pOpv-Df9H-7sCV-gXAD-ztyf-iKEa-62mI36', 'scsi-0QEMU_QEMU_HARDDISK_9516e090-09d3-47b2-a672-12f5ce683363', 'scsi-SQEMU_QEMU_HARDDISK_9516e090-09d3-47b2-a672-12f5ce683363'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536961 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfd7083e-59a5-451a-9789-189314eae1f5', 'scsi-SQEMU_QEMU_HARDDISK_bfd7083e-59a5-451a-9789-189314eae1f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.536992 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537013 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2-osd--block--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2', 'dm-uuid-LVM-QcDdh1J2jxOs6tp7Oe4XT0Zz1JSjF5dI8NXygTe7o6IzXrm3Ci2oWGBt6XdcCnD9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537042 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7-osd--block--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7', 'dm-uuid-LVM-LDASMkfHr0khVCowCzXLcMatR1wSlg7UDRK7AXLK7sqvKBaj0TbHwVG9FYXRIj2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537061 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4de995f9--e371--53ec--a5e6--95298d442fa2-osd--block--4de995f9--e371--53ec--a5e6--95298d442fa2', 'dm-uuid-LVM-4hDC3ozcjstwWQ3E5UxqBrjJp5mz1cIfsVn5PTVRwsj0jMyjGmhIMfAIPNf2GBTF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537180 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537214 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ea687e85--c7c1--53f3--8dfd--7d637eed1a38-osd--block--ea687e85--c7c1--53f3--8dfd--7d637eed1a38', 'dm-uuid-LVM-bBiflqjnftduSHS5XiwByNmPAVGwW9bI5l8qlrblgYE7PdOCmSNNWyQdESxdCPSI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537248 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537266 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 17:03:50.537283 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537311 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537327 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537345 | orchestrator | 
skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537356 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537377 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537388 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537404 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537414 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537425 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537435 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537445 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537466 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537477 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537498 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537515 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537531 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537548 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537565 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537597 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537617 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a', 'scsi-SQEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part1', 'scsi-SQEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part14', 'scsi-SQEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part15', 'scsi-SQEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part16', 'scsi-SQEMU_QEMU_HARDDISK_67fe6e8c-959c-4183-b14f-1847ba00206a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-19 17:03:50.537629 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537652 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part1', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part14', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part15', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part16', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537671 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537682 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | 
bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4de995f9--e371--53ec--a5e6--95298d442fa2-osd--block--4de995f9--e371--53ec--a5e6--95298d442fa2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WoixB1-M6Dg-l8nc-m7Vg-jCLx-Etkb-ybuhtE', 'scsi-0QEMU_QEMU_HARDDISK_5e704911-d475-45db-a46e-b2c1a2edd26e', 'scsi-SQEMU_QEMU_HARDDISK_5e704911-d475-45db-a46e-b2c1a2edd26e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537702 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ea687e85--c7c1--53f3--8dfd--7d637eed1a38-osd--block--ea687e85--c7c1--53f3--8dfd--7d637eed1a38'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-prhbwg-ulIC-7M5H-Gfur-Z1ct-zcRp-etKAGn', 'scsi-0QEMU_QEMU_HARDDISK_ea7e2490-24d2-49b7-b6d3-38bb6098dff1', 'scsi-SQEMU_QEMU_HARDDISK_ea7e2490-24d2-49b7-b6d3-38bb6098dff1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537718 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:03:50.537729 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc231350-c60d-45ad-9b08-eb0e8cdec0b5', 'scsi-SQEMU_QEMU_HARDDISK_bc231350-c60d-45ad-9b08-eb0e8cdec0b5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.537739 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.537749 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.537773 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part1', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part14', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part15', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part16', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.537794 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.537804 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.537814 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.537825 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.537835 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2-osd--block--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GbVk8X-bjpf-wsn1-v0bH-HW56-9ucN-vMi0Ec', 'scsi-0QEMU_QEMU_HARDDISK_8c3574da-2fac-4f58-bc83-f51ba9425a73', 'scsi-SQEMU_QEMU_HARDDISK_8c3574da-2fac-4f58-bc83-f51ba9425a73'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.537876 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.537903 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7-osd--block--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FyKxvb-dSUu-GGIj-2HDa-wyiz-cOn4-Bzoouk', 'scsi-0QEMU_QEMU_HARDDISK_8547d473-0710-428a-9585-3879cf611acd', 'scsi-SQEMU_QEMU_HARDDISK_8547d473-0710-428a-9585-3879cf611acd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.537914 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.537924 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.537933 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ef3193b-7b85-4a69-91dc-ff1919c1d0b3', 'scsi-SQEMU_QEMU_HARDDISK_8ef3193b-7b85-4a69-91dc-ff1919c1d0b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.537943 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538193 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538238 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538252 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538268 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538285 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538301 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.538317 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538333 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538391 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc', 'scsi-SQEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part1', 'scsi-SQEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part14', 'scsi-SQEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part15', 'scsi-SQEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part16', 'scsi-SQEMU_QEMU_HARDDISK_716b3d72-a126-4679-914a-2f4586f413fc-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538405 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538415 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.538425 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538452 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538463 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538474 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447', 'scsi-SQEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part1', 'scsi-SQEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part14', 'scsi-SQEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part15', 'scsi-SQEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part16', 'scsi-SQEMU_QEMU_HARDDISK_91605ab8-a526-47f7-b42b-efc568288447-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538485 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:03:50.538501 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.538510 | orchestrator |
2025-09-19 17:03:50.538521 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-19 17:03:50.538532 | orchestrator | Friday 19 September 2025  16:53:23 +0000 (0:00:01.323)       0:00:35.167 ******
2025-09-19 17:03:50.538547 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.538557 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.538571 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.538581 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.538591 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.538601 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.538616 | orchestrator |
2025-09-19 17:03:50.538632 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-19 17:03:50.538648 | orchestrator | Friday 19 September 2025  16:53:24 +0000 (0:00:01.622)       0:00:36.789 ******
2025-09-19 17:03:50.538664 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.538679 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.538694 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.538710 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.538726 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.538743 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.538759 | orchestrator |
2025-09-19 17:03:50.538776 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-19 17:03:50.538793 | orchestrator | Friday 19 September 2025  16:53:25 +0000 (0:00:00.849)       0:00:37.638 ******
2025-09-19 17:03:50.538810 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.538825 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.538835 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.538845 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.538920 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.538930 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.538939 | orchestrator |
2025-09-19 17:03:50.538949 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-19 17:03:50.538959 | orchestrator | Friday 19 September 2025  16:53:27 +0000 (0:00:01.291)       0:00:38.930 ******
2025-09-19 17:03:50.538968 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.538978 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.538988 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.538997 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.539006 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.539016 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.539024 | orchestrator |
2025-09-19 17:03:50.539032 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-19 17:03:50.539040 | orchestrator | Friday 19 September 2025  16:53:28 +0000 (0:00:01.093)       0:00:40.024 ******
2025-09-19 17:03:50.539048 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.539056 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.539063 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.539071 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.539079 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.539086 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.539094 | orchestrator |
2025-09-19 17:03:50.539102 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-19 17:03:50.539110 | orchestrator | Friday 19 September 2025  16:53:29 +0000 (0:00:01.341)       0:00:41.365 ******
2025-09-19 17:03:50.539126 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.539134 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.539141 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.539149 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.539157 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.539164 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.539172 | orchestrator |
2025-09-19 17:03:50.539180 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-19 17:03:50.539188 | orchestrator | Friday 19 September 2025  16:53:30 +0000 (0:00:01.493)       0:00:42.859 ******
2025-09-19 17:03:50.539196 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 17:03:50.539204 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 17:03:50.539212 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 17:03:50.539220 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 17:03:50.539232 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 17:03:50.539245 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 17:03:50.539259 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 17:03:50.539272 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 17:03:50.539284 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-09-19 17:03:50.539296 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 17:03:50.539309 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-09-19 17:03:50.539321 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 17:03:50.539332 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 17:03:50.539343 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-09-19 17:03:50.539355 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 17:03:50.539367 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-09-19 17:03:50.539379 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-09-19 17:03:50.539391 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-09-19 17:03:50.539402 | orchestrator |
2025-09-19 17:03:50.539415 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-19 17:03:50.539427 | orchestrator | Friday 19 September 2025  16:53:35 +0000 (0:00:04.414)       0:00:47.274 ******
2025-09-19 17:03:50.539439 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 17:03:50.539451 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 17:03:50.539464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 17:03:50.539477 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.539491 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 17:03:50.539503 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 17:03:50.539516 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 17:03:50.539528 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 17:03:50.539542 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 17:03:50.539555 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 17:03:50.539578 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.539594 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 17:03:50.539602 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 17:03:50.539610 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 17:03:50.539618 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.539626 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.539633 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-19 17:03:50.539641 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-19 17:03:50.539649 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-19 17:03:50.539664 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.539672 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-19 17:03:50.539679 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-19 17:03:50.539687 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-19 17:03:50.539695 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.539702 | orchestrator |
2025-09-19 17:03:50.539710 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-19 17:03:50.539718 | orchestrator | Friday 19 September 2025  16:53:36 +0000 (0:00:01.029)       0:00:48.303 ******
2025-09-19 17:03:50.539726 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.539734 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.539741 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.539750 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.539758 | orchestrator |
2025-09-19 17:03:50.539766 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-19 17:03:50.539774 | orchestrator | Friday 19 September 2025  16:53:37 +0000 (0:00:01.254)       0:00:49.558 ******
2025-09-19 17:03:50.539782 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.539790 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.539797 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.539805 | orchestrator |
2025-09-19 17:03:50.539813 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-19 17:03:50.539821 | orchestrator | Friday 19 September 2025  16:53:38 +0000 (0:00:00.532)       0:00:50.090 ******
2025-09-19 17:03:50.539829 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.539836 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.539844 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.539869 | orchestrator |
2025-09-19 17:03:50.539877 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-19 17:03:50.539885 | orchestrator | Friday 19 September 2025  16:53:38 +0000 (0:00:00.654)       0:00:50.745 ******
2025-09-19 17:03:50.539893 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.539901 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.539909 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.539917 | orchestrator |
2025-09-19 17:03:50.539925 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-19 17:03:50.539933 | orchestrator | Friday 19 September 2025  16:53:39 +0000 (0:00:00.836)       0:00:51.582 ******
2025-09-19 17:03:50.539941 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.539949 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.539957 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.539965 | orchestrator |
2025-09-19 17:03:50.539973 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-19 17:03:50.539981 | orchestrator | Friday 19 September 2025  16:53:40 +0000 (0:00:00.786)       0:00:52.368 ******
2025-09-19 17:03:50.539989 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:03:50.539997 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:03:50.540005 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:03:50.540013 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.540020 | orchestrator |
2025-09-19 17:03:50.540028 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-19 17:03:50.540037 | orchestrator | Friday 19 September 2025  16:53:40 +0000 (0:00:00.392)       0:00:52.761 ******
2025-09-19 17:03:50.540045 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:03:50.540053 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:03:50.540061 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:03:50.540069 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.540082 | orchestrator |
2025-09-19 17:03:50.540090 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-19 17:03:50.540098 | orchestrator | Friday 19 September 2025  16:53:41 +0000 (0:00:00.427)       0:00:53.188 ******
2025-09-19 17:03:50.540105 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:03:50.540114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:03:50.540122 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:03:50.540129 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.540137 | orchestrator |
2025-09-19 17:03:50.540145 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-19 17:03:50.540153 | orchestrator | Friday 19 September 2025  16:53:41 +0000 (0:00:00.529)       0:00:53.718 ******
2025-09-19 17:03:50.540161 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.540169 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.540177 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.540185 | orchestrator |
2025-09-19 17:03:50.540193 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-19 17:03:50.540201 | orchestrator | Friday 19 September 2025  16:53:42 +0000 (0:00:00.383)       0:00:54.101 ******
2025-09-19 17:03:50.540209 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-19 17:03:50.540217 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-19 17:03:50.540225 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-19 17:03:50.540232 | orchestrator |
2025-09-19 17:03:50.540245 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-19 17:03:50.540261 | orchestrator | Friday 19 September 2025  16:53:42 +0000 (0:00:00.742)       0:00:54.844 ******
2025-09-19 17:03:50.540270 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 17:03:50.540278 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 17:03:50.540286 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 17:03:50.540293 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:03:50.540301 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-19 17:03:50.540309 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-19 17:03:50.540317 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-19 17:03:50.540324 | orchestrator |
2025-09-19 17:03:50.540332 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-19 17:03:50.540340 | orchestrator | Friday 19 September 2025  16:53:43 +0000 (0:00:00.678)       0:00:55.523 ******
2025-09-19 17:03:50.540348 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 17:03:50.540356 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 17:03:50.540363 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 17:03:50.540371 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:03:50.540379 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-19 17:03:50.540387 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-19 17:03:50.540394 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-19 17:03:50.540402 | orchestrator |
2025-09-19 17:03:50.540410 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 17:03:50.540418 | orchestrator | Friday 19 September 2025  16:53:45 +0000 (0:00:01.880)       0:00:57.403 ******
2025-09-19 17:03:50.540426 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:03:50.540434 | orchestrator |
2025-09-19 17:03:50.540442 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml]
********************* 2025-09-19 17:03:50.540455 | orchestrator | Friday 19 September 2025 16:53:47 +0000 (0:00:01.605) 0:00:59.009 ****** 2025-09-19 17:03:50.540463 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:03:50.540471 | orchestrator | 2025-09-19 17:03:50.540478 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 17:03:50.540486 | orchestrator | Friday 19 September 2025 16:53:49 +0000 (0:00:01.918) 0:01:00.927 ****** 2025-09-19 17:03:50.540494 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.540502 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.540510 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.540518 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.540525 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.540533 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.540541 | orchestrator | 2025-09-19 17:03:50.540549 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 17:03:50.540556 | orchestrator | Friday 19 September 2025 16:53:50 +0000 (0:00:01.486) 0:01:02.413 ****** 2025-09-19 17:03:50.540564 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.540572 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.540580 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.540587 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.540595 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.540603 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.540610 | orchestrator | 2025-09-19 17:03:50.540618 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 17:03:50.540626 | orchestrator | Friday 19 September 2025 16:53:51 +0000 
(0:00:00.925) 0:01:03.338 ****** 2025-09-19 17:03:50.540634 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.540642 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.540649 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.540657 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.540665 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.540672 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.540680 | orchestrator | 2025-09-19 17:03:50.540688 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 17:03:50.540696 | orchestrator | Friday 19 September 2025 16:53:52 +0000 (0:00:01.101) 0:01:04.440 ****** 2025-09-19 17:03:50.540703 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.540711 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.540719 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.540727 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.540734 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.540742 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.540750 | orchestrator | 2025-09-19 17:03:50.540758 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 17:03:50.540766 | orchestrator | Friday 19 September 2025 16:53:53 +0000 (0:00:01.346) 0:01:05.786 ****** 2025-09-19 17:03:50.540773 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.540781 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.540789 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.540797 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.540804 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.540812 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.540820 | orchestrator | 2025-09-19 17:03:50.540828 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 
2025-09-19 17:03:50.540839 | orchestrator | Friday 19 September 2025 16:53:55 +0000 (0:00:01.302) 0:01:07.089 ****** 2025-09-19 17:03:50.540867 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.540876 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.540884 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.540892 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.540905 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.540913 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.540920 | orchestrator | 2025-09-19 17:03:50.540928 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 17:03:50.540937 | orchestrator | Friday 19 September 2025 16:53:55 +0000 (0:00:00.543) 0:01:07.633 ****** 2025-09-19 17:03:50.540944 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.540952 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.540960 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.540968 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.540976 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.540984 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.540991 | orchestrator | 2025-09-19 17:03:50.540999 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 17:03:50.541007 | orchestrator | Friday 19 September 2025 16:53:56 +0000 (0:00:00.580) 0:01:08.213 ****** 2025-09-19 17:03:50.541015 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.541023 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.541031 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.541039 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.541047 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.541055 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.541062 | orchestrator | 2025-09-19 
17:03:50.541070 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 17:03:50.541078 | orchestrator | Friday 19 September 2025 16:53:57 +0000 (0:00:01.208) 0:01:09.421 ****** 2025-09-19 17:03:50.541086 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.541094 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.541102 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.541110 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.541117 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.541125 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.541133 | orchestrator | 2025-09-19 17:03:50.541141 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 17:03:50.541149 | orchestrator | Friday 19 September 2025 16:53:58 +0000 (0:00:00.985) 0:01:10.407 ****** 2025-09-19 17:03:50.541157 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.541165 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.541173 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.541180 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.541188 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.541196 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.541204 | orchestrator | 2025-09-19 17:03:50.541212 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 17:03:50.541220 | orchestrator | Friday 19 September 2025 16:53:59 +0000 (0:00:00.776) 0:01:11.183 ****** 2025-09-19 17:03:50.541228 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.541236 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.541243 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.541251 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.541259 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.541267 | 
orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.541275 | orchestrator | 2025-09-19 17:03:50.541283 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 17:03:50.541291 | orchestrator | Friday 19 September 2025 16:53:59 +0000 (0:00:00.589) 0:01:11.773 ****** 2025-09-19 17:03:50.541299 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.541307 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.541314 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.541322 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.541330 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.541338 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.541346 | orchestrator | 2025-09-19 17:03:50.541354 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 17:03:50.541362 | orchestrator | Friday 19 September 2025 16:54:00 +0000 (0:00:00.694) 0:01:12.467 ****** 2025-09-19 17:03:50.541373 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.541381 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.541389 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.541397 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.541405 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.541413 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.541420 | orchestrator | 2025-09-19 17:03:50.541428 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 17:03:50.541436 | orchestrator | Friday 19 September 2025 16:54:01 +0000 (0:00:00.616) 0:01:13.084 ****** 2025-09-19 17:03:50.541444 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.541452 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.541460 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.541468 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
17:03:50.541476 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.541483 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.541491 | orchestrator | 2025-09-19 17:03:50.541500 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 17:03:50.541507 | orchestrator | Friday 19 September 2025 16:54:02 +0000 (0:00:00.801) 0:01:13.886 ****** 2025-09-19 17:03:50.541515 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.541523 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.541531 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.541539 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.541547 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.541554 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.541562 | orchestrator | 2025-09-19 17:03:50.541570 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 17:03:50.541578 | orchestrator | Friday 19 September 2025 16:54:02 +0000 (0:00:00.954) 0:01:14.840 ****** 2025-09-19 17:03:50.541586 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.541594 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.541601 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.541609 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.541617 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.541625 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.541633 | orchestrator | 2025-09-19 17:03:50.541644 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 17:03:50.541656 | orchestrator | Friday 19 September 2025 16:54:03 +0000 (0:00:00.900) 0:01:15.741 ****** 2025-09-19 17:03:50.541664 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.541672 | orchestrator | skipping: [testbed-node-4] 2025-09-19 
17:03:50.541679 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.541687 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.541695 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.541703 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.541710 | orchestrator | 2025-09-19 17:03:50.541718 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 17:03:50.541726 | orchestrator | Friday 19 September 2025 16:54:04 +0000 (0:00:00.683) 0:01:16.425 ****** 2025-09-19 17:03:50.541734 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.541742 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.541749 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.541757 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.541765 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.541772 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.541780 | orchestrator | 2025-09-19 17:03:50.541788 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 17:03:50.541796 | orchestrator | Friday 19 September 2025 16:54:05 +0000 (0:00:01.009) 0:01:17.434 ****** 2025-09-19 17:03:50.541804 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.541816 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.541830 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.541898 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.541913 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.541926 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.541941 | orchestrator | 2025-09-19 17:03:50.541955 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-09-19 17:03:50.541968 | orchestrator | Friday 19 September 2025 16:54:06 +0000 (0:00:01.197) 0:01:18.632 ****** 2025-09-19 17:03:50.541980 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:03:50.541989 | 
orchestrator | changed: [testbed-node-4] 2025-09-19 17:03:50.541996 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:03:50.542004 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.542012 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:03:50.542058 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:03:50.542066 | orchestrator | 2025-09-19 17:03:50.542075 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-09-19 17:03:50.542083 | orchestrator | Friday 19 September 2025 16:54:08 +0000 (0:00:01.844) 0:01:20.476 ****** 2025-09-19 17:03:50.542091 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:03:50.542098 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:03:50.542106 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:03:50.542114 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.542122 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:03:50.542130 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:03:50.542138 | orchestrator | 2025-09-19 17:03:50.542145 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-09-19 17:03:50.542153 | orchestrator | Friday 19 September 2025 16:54:10 +0000 (0:00:02.139) 0:01:22.616 ****** 2025-09-19 17:03:50.542161 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:03:50.542167 | orchestrator | 2025-09-19 17:03:50.542174 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-09-19 17:03:50.542181 | orchestrator | Friday 19 September 2025 16:54:11 +0000 (0:00:00.974) 0:01:23.590 ****** 2025-09-19 17:03:50.542187 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.542194 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.542201 | orchestrator | 
skipping: [testbed-node-5] 2025-09-19 17:03:50.542207 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.542214 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.542221 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.542227 | orchestrator | 2025-09-19 17:03:50.542234 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-09-19 17:03:50.542241 | orchestrator | Friday 19 September 2025 16:54:12 +0000 (0:00:00.516) 0:01:24.107 ****** 2025-09-19 17:03:50.542247 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.542254 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.542261 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.542267 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.542274 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.542280 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.542287 | orchestrator | 2025-09-19 17:03:50.542293 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-09-19 17:03:50.542300 | orchestrator | Friday 19 September 2025 16:54:12 +0000 (0:00:00.637) 0:01:24.745 ****** 2025-09-19 17:03:50.542307 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-19 17:03:50.542314 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-19 17:03:50.542320 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-19 17:03:50.542327 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-19 17:03:50.542333 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-19 17:03:50.542340 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-09-19 17:03:50.542352 | orchestrator | ok: [testbed-node-3] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-19 17:03:50.542359 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-19 17:03:50.542366 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-19 17:03:50.542372 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-19 17:03:50.542379 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-19 17:03:50.542399 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-09-19 17:03:50.542406 | orchestrator | 2025-09-19 17:03:50.542417 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-09-19 17:03:50.542423 | orchestrator | Friday 19 September 2025 16:54:14 +0000 (0:00:01.274) 0:01:26.019 ****** 2025-09-19 17:03:50.542430 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:03:50.542437 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:03:50.542443 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:03:50.542450 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.542457 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:03:50.542463 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:03:50.542470 | orchestrator | 2025-09-19 17:03:50.542477 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-09-19 17:03:50.542483 | orchestrator | Friday 19 September 2025 16:54:15 +0000 (0:00:00.990) 0:01:27.010 ****** 2025-09-19 17:03:50.542490 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.542497 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.542503 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.542510 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.542516 | 
orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.542523 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.542530 | orchestrator | 2025-09-19 17:03:50.542536 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-09-19 17:03:50.542543 | orchestrator | Friday 19 September 2025 16:54:15 +0000 (0:00:00.510) 0:01:27.521 ****** 2025-09-19 17:03:50.542550 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.542556 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.542563 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.542569 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.542576 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.542582 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.542589 | orchestrator | 2025-09-19 17:03:50.542596 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-09-19 17:03:50.542602 | orchestrator | Friday 19 September 2025 16:54:16 +0000 (0:00:00.622) 0:01:28.143 ****** 2025-09-19 17:03:50.542609 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.542616 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.542622 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.542629 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.542635 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.542642 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.542648 | orchestrator | 2025-09-19 17:03:50.542655 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-09-19 17:03:50.542662 | orchestrator | Friday 19 September 2025 16:54:16 +0000 (0:00:00.497) 0:01:28.641 ****** 2025-09-19 17:03:50.542669 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:03:50.542675 | orchestrator | 2025-09-19 17:03:50.542682 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-09-19 17:03:50.542689 | orchestrator | Friday 19 September 2025 16:54:17 +0000 (0:00:01.058) 0:01:29.699 ****** 2025-09-19 17:03:50.542700 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.542707 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.542713 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.542720 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.542726 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.542733 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.542739 | orchestrator | 2025-09-19 17:03:50.542746 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-09-19 17:03:50.542753 | orchestrator | Friday 19 September 2025 16:55:03 +0000 (0:00:45.319) 0:02:15.019 ****** 2025-09-19 17:03:50.542760 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-19 17:03:50.542766 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-19 17:03:50.542773 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-19 17:03:50.542779 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.542786 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-19 17:03:50.542792 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-19 17:03:50.542799 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-19 17:03:50.542806 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.542812 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-19 
17:03:50.542819 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-19 17:03:50.542825 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-19 17:03:50.542832 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.542839 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-19 17:03:50.542845 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-19 17:03:50.542864 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-19 17:03:50.542871 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.542877 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-19 17:03:50.542884 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-19 17:03:50.542891 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-19 17:03:50.542897 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.542904 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-09-19 17:03:50.542914 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-09-19 17:03:50.542924 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-09-19 17:03:50.542931 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.542938 | orchestrator | 2025-09-19 17:03:50.542945 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-09-19 17:03:50.542951 | orchestrator | Friday 19 September 2025 16:55:03 +0000 (0:00:00.720) 0:02:15.740 ****** 2025-09-19 17:03:50.542958 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.542965 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.542971 | 
orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.542978 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.542985 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.542991 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.543036 | orchestrator | 2025-09-19 17:03:50.543043 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-09-19 17:03:50.543050 | orchestrator | Friday 19 September 2025 16:55:04 +0000 (0:00:00.768) 0:02:16.509 ****** 2025-09-19 17:03:50.543057 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.543064 | orchestrator | 2025-09-19 17:03:50.543070 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-09-19 17:03:50.543082 | orchestrator | Friday 19 September 2025 16:55:04 +0000 (0:00:00.148) 0:02:16.658 ****** 2025-09-19 17:03:50.543089 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.543095 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.543102 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.543108 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.543115 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.543121 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.543128 | orchestrator | 2025-09-19 17:03:50.543135 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-09-19 17:03:50.543141 | orchestrator | Friday 19 September 2025 16:55:05 +0000 (0:00:00.515) 0:02:17.173 ****** 2025-09-19 17:03:50.543148 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.543155 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.543161 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.543168 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.543174 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.543181 | 
orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.543187 | orchestrator | 2025-09-19 17:03:50.543194 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-09-19 17:03:50.543201 | orchestrator | Friday 19 September 2025 16:55:06 +0000 (0:00:00.723) 0:02:17.896 ****** 2025-09-19 17:03:50.543207 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.543214 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.543221 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.543227 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.543234 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.543240 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.543247 | orchestrator | 2025-09-19 17:03:50.543253 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-09-19 17:03:50.543260 | orchestrator | Friday 19 September 2025 16:55:06 +0000 (0:00:00.620) 0:02:18.517 ****** 2025-09-19 17:03:50.543267 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.543273 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.543280 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.543287 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.543293 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.543300 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.543306 | orchestrator | 2025-09-19 17:03:50.543313 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-09-19 17:03:50.543320 | orchestrator | Friday 19 September 2025 16:55:09 +0000 (0:00:02.576) 0:02:21.094 ****** 2025-09-19 17:03:50.543326 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.543333 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.543340 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.543346 | orchestrator | ok: [testbed-node-0] 2025-09-19 
17:03:50.543353 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.543359 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.543366 | orchestrator | 2025-09-19 17:03:50.543373 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-09-19 17:03:50.543379 | orchestrator | Friday 19 September 2025 16:55:09 +0000 (0:00:00.648) 0:02:21.742 ****** 2025-09-19 17:03:50.543386 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:03:50.543394 | orchestrator | 2025-09-19 17:03:50.543401 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-09-19 17:03:50.543408 | orchestrator | Friday 19 September 2025 16:55:10 +0000 (0:00:01.023) 0:02:22.766 ****** 2025-09-19 17:03:50.543414 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.543421 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.543427 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.543434 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.543445 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.543451 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.543458 | orchestrator | 2025-09-19 17:03:50.543465 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-09-19 17:03:50.543471 | orchestrator | Friday 19 September 2025 16:55:11 +0000 (0:00:00.522) 0:02:23.288 ****** 2025-09-19 17:03:50.543478 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.543485 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.543491 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.543498 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.543504 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.543511 | 
orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.543518 | orchestrator | 2025-09-19 17:03:50.543525 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-19 17:03:50.543531 | orchestrator | Friday 19 September 2025 16:55:12 +0000 (0:00:00.746) 0:02:24.034 ****** 2025-09-19 17:03:50.543538 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.543544 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.543551 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.543558 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.543564 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.543574 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.543581 | orchestrator | 2025-09-19 17:03:50.543592 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-19 17:03:50.543599 | orchestrator | Friday 19 September 2025 16:55:12 +0000 (0:00:00.519) 0:02:24.554 ****** 2025-09-19 17:03:50.543605 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.543612 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.543619 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.543625 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.543632 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.543638 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.543645 | orchestrator | 2025-09-19 17:03:50.543652 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-19 17:03:50.543658 | orchestrator | Friday 19 September 2025 16:55:13 +0000 (0:00:00.784) 0:02:25.338 ****** 2025-09-19 17:03:50.543665 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.543672 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.543678 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.543685 | 
orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.543692 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.543698 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.543705 | orchestrator | 2025-09-19 17:03:50.543712 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-19 17:03:50.543718 | orchestrator | Friday 19 September 2025 16:55:14 +0000 (0:00:00.717) 0:02:26.055 ****** 2025-09-19 17:03:50.543725 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.543732 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.543738 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.543745 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.543751 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.543758 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.543765 | orchestrator | 2025-09-19 17:03:50.543772 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-09-19 17:03:50.543778 | orchestrator | Friday 19 September 2025 16:55:14 +0000 (0:00:00.799) 0:02:26.854 ****** 2025-09-19 17:03:50.543785 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.543792 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.543798 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.543805 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.543811 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.543818 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.543824 | orchestrator | 2025-09-19 17:03:50.543831 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-19 17:03:50.543843 | orchestrator | Friday 19 September 2025 16:55:15 +0000 (0:00:00.682) 0:02:27.537 ****** 2025-09-19 17:03:50.543864 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.543870 | 
orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.543877 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.543884 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.543890 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.543897 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.543903 | orchestrator | 2025-09-19 17:03:50.543910 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-19 17:03:50.543916 | orchestrator | Friday 19 September 2025 16:55:16 +0000 (0:00:00.821) 0:02:28.359 ****** 2025-09-19 17:03:50.543923 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.543929 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.543936 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.543942 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.543949 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.543955 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.543962 | orchestrator | 2025-09-19 17:03:50.543969 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-19 17:03:50.543975 | orchestrator | Friday 19 September 2025 16:55:17 +0000 (0:00:01.154) 0:02:29.513 ****** 2025-09-19 17:03:50.543982 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:03:50.543988 | orchestrator | 2025-09-19 17:03:50.543995 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-19 17:03:50.544002 | orchestrator | Friday 19 September 2025 16:55:18 +0000 (0:00:01.014) 0:02:30.527 ****** 2025-09-19 17:03:50.544008 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-19 17:03:50.544015 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-19 17:03:50.544022 | 
orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-19 17:03:50.544028 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-19 17:03:50.544035 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-19 17:03:50.544041 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-19 17:03:50.544048 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-19 17:03:50.544054 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-19 17:03:50.544061 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-19 17:03:50.544067 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-19 17:03:50.544074 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-19 17:03:50.544080 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-19 17:03:50.544087 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-19 17:03:50.544093 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-19 17:03:50.544100 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-19 17:03:50.544106 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-19 17:03:50.544113 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-19 17:03:50.544119 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-19 17:03:50.544126 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-19 17:03:50.544132 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-19 17:03:50.544142 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-19 17:03:50.544155 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-19 17:03:50.544162 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-19 
17:03:50.544169 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-19 17:03:50.544186 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-19 17:03:50.544199 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-19 17:03:50.544210 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-19 17:03:50.544221 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-19 17:03:50.544232 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-19 17:03:50.544242 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-19 17:03:50.544253 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-19 17:03:50.544264 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-19 17:03:50.544275 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-19 17:03:50.544283 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-19 17:03:50.544290 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-19 17:03:50.544297 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-19 17:03:50.544303 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-19 17:03:50.544311 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-19 17:03:50.544323 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-19 17:03:50.544334 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-19 17:03:50.544344 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-19 17:03:50.544356 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-19 17:03:50.544367 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-19 17:03:50.544378 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-19 17:03:50.544390 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-19 17:03:50.544400 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-19 17:03:50.544412 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-19 17:03:50.544421 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-19 17:03:50.544432 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 17:03:50.544443 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 17:03:50.544453 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 17:03:50.544464 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 17:03:50.544474 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 17:03:50.544485 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-19 17:03:50.544495 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 17:03:50.544506 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 17:03:50.544517 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 17:03:50.544529 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 17:03:50.544540 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 17:03:50.544552 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-19 17:03:50.544563 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 17:03:50.544574 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 
2025-09-19 17:03:50.544585 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 17:03:50.544596 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 17:03:50.544606 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 17:03:50.544617 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-19 17:03:50.544640 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 17:03:50.544651 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 17:03:50.544662 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 17:03:50.544673 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 17:03:50.544685 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 17:03:50.544694 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 17:03:50.544701 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 17:03:50.544707 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 17:03:50.544714 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 17:03:50.544720 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-19 17:03:50.544727 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 17:03:50.544733 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 17:03:50.544745 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 17:03:50.544756 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 17:03:50.544763 | 
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 17:03:50.544770 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-19 17:03:50.544777 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-19 17:03:50.544783 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 17:03:50.544790 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-19 17:03:50.544797 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-19 17:03:50.544804 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-19 17:03:50.544810 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-19 17:03:50.544817 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-19 17:03:50.544823 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-09-19 17:03:50.544830 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-19 17:03:50.544837 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-19 17:03:50.544843 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-19 17:03:50.544866 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-19 17:03:50.544873 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-19 17:03:50.544879 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-19 17:03:50.544886 | orchestrator | 2025-09-19 17:03:50.544893 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-19 17:03:50.544900 | orchestrator | Friday 19 September 2025 16:55:25 +0000 (0:00:06.835) 0:02:37.362 ****** 2025-09-19 17:03:50.544906 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.544913 | orchestrator | skipping: [testbed-node-1] 2025-09-19 
17:03:50.544920 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.544927 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 17:03:50.544934 | orchestrator | 2025-09-19 17:03:50.544941 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-19 17:03:50.544947 | orchestrator | Friday 19 September 2025 16:55:26 +0000 (0:00:00.937) 0:02:38.299 ****** 2025-09-19 17:03:50.544954 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.544966 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.544973 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.544980 | orchestrator | 2025-09-19 17:03:50.544987 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-19 17:03:50.544994 | orchestrator | Friday 19 September 2025 16:55:27 +0000 (0:00:00.672) 0:02:38.972 ****** 2025-09-19 17:03:50.545000 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.545007 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.545014 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.545021 | orchestrator | 2025-09-19 17:03:50.545028 | orchestrator | TASK [ceph-config : Reset num_osds] 
******************************************** 2025-09-19 17:03:50.545035 | orchestrator | Friday 19 September 2025 16:55:28 +0000 (0:00:01.620) 0:02:40.593 ****** 2025-09-19 17:03:50.545041 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.545048 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.545055 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.545062 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.545068 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.545075 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.545082 | orchestrator | 2025-09-19 17:03:50.545089 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-19 17:03:50.545095 | orchestrator | Friday 19 September 2025 16:55:29 +0000 (0:00:00.522) 0:02:41.115 ****** 2025-09-19 17:03:50.545102 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.545109 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.545116 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.545122 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.545129 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.545136 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.545143 | orchestrator | 2025-09-19 17:03:50.545149 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-19 17:03:50.545156 | orchestrator | Friday 19 September 2025 16:55:29 +0000 (0:00:00.721) 0:02:41.837 ****** 2025-09-19 17:03:50.545163 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.545170 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.545176 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.545183 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.545189 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.545196 | orchestrator | skipping: [testbed-node-2] 2025-09-19 
17:03:50.545203 | orchestrator | 2025-09-19 17:03:50.545210 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-19 17:03:50.545216 | orchestrator | Friday 19 September 2025 16:55:30 +0000 (0:00:00.566) 0:02:42.403 ****** 2025-09-19 17:03:50.545226 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.545233 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.545244 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.545251 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.545257 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.545264 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.545271 | orchestrator | 2025-09-19 17:03:50.545278 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-19 17:03:50.545284 | orchestrator | Friday 19 September 2025 16:55:31 +0000 (0:00:00.487) 0:02:42.891 ****** 2025-09-19 17:03:50.545291 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.545298 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.545305 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.545315 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.545322 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.545329 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.545335 | orchestrator | 2025-09-19 17:03:50.545342 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-19 17:03:50.545349 | orchestrator | Friday 19 September 2025 16:55:32 +0000 (0:00:01.068) 0:02:43.960 ****** 2025-09-19 17:03:50.545356 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.545362 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.545369 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.545376 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 17:03:50.545382 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.545389 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.545396 | orchestrator | 2025-09-19 17:03:50.545403 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-19 17:03:50.545409 | orchestrator | Friday 19 September 2025 16:55:32 +0000 (0:00:00.677) 0:02:44.637 ****** 2025-09-19 17:03:50.545416 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.545423 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.545430 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.545436 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.545443 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.545450 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.545456 | orchestrator | 2025-09-19 17:03:50.545463 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-19 17:03:50.545470 | orchestrator | Friday 19 September 2025 16:55:33 +0000 (0:00:00.838) 0:02:45.476 ****** 2025-09-19 17:03:50.545477 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.545483 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.545490 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.545497 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.545503 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.545510 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.545517 | orchestrator | 2025-09-19 17:03:50.545523 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-19 17:03:50.545530 | orchestrator | Friday 19 September 2025 16:55:34 +0000 (0:00:00.546) 0:02:46.023 ****** 2025-09-19 17:03:50.545537 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 17:03:50.545543 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.545550 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.545557 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.545563 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.545570 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.545577 | orchestrator | 2025-09-19 17:03:50.545584 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-19 17:03:50.545590 | orchestrator | Friday 19 September 2025 16:55:37 +0000 (0:00:03.199) 0:02:49.222 ****** 2025-09-19 17:03:50.545597 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.545604 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.545611 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.545617 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.545624 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.545631 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.545637 | orchestrator | 2025-09-19 17:03:50.545644 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-19 17:03:50.545651 | orchestrator | Friday 19 September 2025 16:55:37 +0000 (0:00:00.635) 0:02:49.858 ****** 2025-09-19 17:03:50.545658 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.545665 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.545671 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.545678 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.545685 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.545696 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.545702 | orchestrator | 2025-09-19 17:03:50.545709 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-19 17:03:50.545716 | orchestrator | Friday 19 September 2025 16:55:39 +0000 
(0:00:01.116) 0:02:50.974 ****** 2025-09-19 17:03:50.545723 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.545729 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.545736 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.545743 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.545749 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.545756 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.545762 | orchestrator | 2025-09-19 17:03:50.545769 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-09-19 17:03:50.545776 | orchestrator | Friday 19 September 2025 16:55:39 +0000 (0:00:00.828) 0:02:51.802 ****** 2025-09-19 17:03:50.545783 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.545790 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.545796 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.545803 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.545810 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.545817 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.545823 | orchestrator | 2025-09-19 17:03:50.545834 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-09-19 17:03:50.545843 | orchestrator | Friday 19 September 2025 16:55:40 +0000 (0:00:00.753) 0:02:52.556 ****** 2025-09-19 17:03:50.545864 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast 
endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-09-19 17:03:50.545873 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-09-19 17:03:50.545881 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.545888 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-09-19 17:03:50.545895 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-09-19 17:03:50.545902 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-09-19 17:03:50.545909 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.545916 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 
2025-09-19 17:03:50.545927 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.545934 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.545941 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.545947 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.545954 | orchestrator |
2025-09-19 17:03:50.545961 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2025-09-19 17:03:50.545968 | orchestrator | Friday 19 September 2025 16:55:41 +0000 (0:00:00.503) 0:02:53.060 ******
2025-09-19 17:03:50.545974 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.545981 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.545988 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.545995 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.546001 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.546008 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.546035 | orchestrator |
2025-09-19 17:03:50.546044 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2025-09-19 17:03:50.546051 | orchestrator | Friday 19 September 2025 16:55:41 +0000 (0:00:00.659) 0:02:53.719 ******
2025-09-19 17:03:50.546058 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.546064 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.546071 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.546078 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.546084 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.546090 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.546097 | orchestrator |
2025-09-19 17:03:50.546132 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-19 17:03:50.546139 | orchestrator | Friday 19 September 2025 16:55:42 +0000 (0:00:00.552) 0:02:54.272 ******
2025-09-19 17:03:50.546146 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.546153 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.546159 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.546166 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.546172 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.546179 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.546185 | orchestrator |
2025-09-19 17:03:50.546192 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-19 17:03:50.546199 | orchestrator | Friday 19 September 2025 16:55:43 +0000 (0:00:00.899) 0:02:55.172 ******
2025-09-19 17:03:50.546205 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.546212 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.546218 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.546225 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.546232 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.546238 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.546245 | orchestrator |
2025-09-19 17:03:50.546252 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-19 17:03:50.546258 | orchestrator | Friday 19 September 2025 16:55:43 +0000 (0:00:00.601) 0:02:55.773 ******
2025-09-19 17:03:50.546265 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.546281 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.546288 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.546298 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.546305 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.546312 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.546318 | orchestrator |
2025-09-19 17:03:50.546325 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-19 17:03:50.546332 | orchestrator | Friday 19 September 2025 16:55:44 +0000 (0:00:00.703) 0:02:56.477 ******
2025-09-19 17:03:50.546339 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.546345 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.546352 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.546358 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.546370 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.546376 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.546383 | orchestrator |
2025-09-19 17:03:50.546390 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-19 17:03:50.546396 | orchestrator | Friday 19 September 2025 16:55:45 +0000 (0:00:01.180) 0:02:57.657 ******
2025-09-19 17:03:50.546403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:03:50.546410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:03:50.546416 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:03:50.546423 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.546430 | orchestrator |
2025-09-19 17:03:50.546436 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-19 17:03:50.546443 | orchestrator | Friday 19 September 2025 16:55:46 +0000 (0:00:00.368) 0:02:58.026 ******
2025-09-19 17:03:50.546450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:03:50.546456 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:03:50.546463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:03:50.546469 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.546476 | orchestrator |
2025-09-19 17:03:50.546483 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-19 17:03:50.546489 | orchestrator | Friday 19 September 2025 16:55:46 +0000 (0:00:00.571) 0:02:58.597 ******
2025-09-19 17:03:50.546496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:03:50.546503 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:03:50.546509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:03:50.546516 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.546522 | orchestrator |
2025-09-19 17:03:50.546529 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-19 17:03:50.546536 | orchestrator | Friday 19 September 2025 16:55:47 +0000 (0:00:00.603) 0:02:59.201 ******
2025-09-19 17:03:50.546542 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.546549 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.546556 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.546563 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.546569 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.546576 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.546582 | orchestrator |
2025-09-19 17:03:50.546589 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-19 17:03:50.546596 | orchestrator | Friday 19 September 2025 16:55:48 +0000 (0:00:00.679) 0:02:59.880 ******
2025-09-19 17:03:50.546602 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-19 17:03:50.546609 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-19 17:03:50.546616 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-09-19 17:03:50.546622 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.546629 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-09-19 17:03:50.546636 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-09-19 17:03:50.546642 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.546649 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.546655 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-19 17:03:50.546662 | orchestrator |
2025-09-19 17:03:50.546669 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2025-09-19 17:03:50.546676 | orchestrator | Friday 19 September 2025 16:55:49 +0000 (0:00:01.969) 0:03:01.849 ******
2025-09-19 17:03:50.546682 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.546689 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.546695 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.546702 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.546709 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.546719 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.546726 | orchestrator |
2025-09-19 17:03:50.546733 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-19 17:03:50.546739 | orchestrator | Friday 19 September 2025 16:55:52 +0000 (0:00:02.363) 0:03:04.212 ******
2025-09-19 17:03:50.546746 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.546753 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.546759 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.546766 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.546772 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.546779 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.546786 | orchestrator |
2025-09-19 17:03:50.546792 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-09-19 17:03:50.546799 | orchestrator | Friday 19 September 2025 16:55:53 +0000 (0:00:01.564) 0:03:05.777 ******
2025-09-19 17:03:50.546806 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.546812 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.546819 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.546826 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:03:50.546832 | orchestrator |
2025-09-19 17:03:50.546839 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-09-19 17:03:50.546880 | orchestrator | Friday 19 September 2025 16:55:54 +0000 (0:00:00.939) 0:03:06.716 ******
2025-09-19 17:03:50.546888 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.546895 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.546902 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.546909 | orchestrator |
2025-09-19 17:03:50.546924 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-09-19 17:03:50.546931 | orchestrator | Friday 19 September 2025 16:55:55 +0000 (0:00:00.342) 0:03:07.059 ******
2025-09-19 17:03:50.546938 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.546944 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.546951 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.546957 | orchestrator |
2025-09-19 17:03:50.546963 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-09-19 17:03:50.546969 | orchestrator | Friday 19 September 2025 16:55:56 +0000 (0:00:01.220) 0:03:08.279 ******
2025-09-19 17:03:50.546975 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 17:03:50.546982 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 17:03:50.546988 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 17:03:50.546994 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.547000 | orchestrator |
2025-09-19 17:03:50.547006 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-09-19 17:03:50.547013 | orchestrator | Friday 19 September 2025 16:55:57 +0000 (0:00:00.817) 0:03:09.097 ******
2025-09-19 17:03:50.547019 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.547025 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.547031 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.547037 | orchestrator |
2025-09-19 17:03:50.547043 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-19 17:03:50.547050 | orchestrator | Friday 19 September 2025 16:55:57 +0000 (0:00:00.343) 0:03:09.440 ******
2025-09-19 17:03:50.547056 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.547062 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.547068 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.547074 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.547081 | orchestrator |
2025-09-19 17:03:50.547087 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-19 17:03:50.547093 | orchestrator | Friday 19 September 2025 16:55:58 +0000 (0:00:01.080) 0:03:10.521 ******
2025-09-19 17:03:50.547099 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:03:50.547110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:03:50.547116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:03:50.547122 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547128 | orchestrator |
2025-09-19 17:03:50.547134 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-19 17:03:50.547140 | orchestrator | Friday 19 September 2025 16:55:58 +0000 (0:00:00.337) 0:03:10.859 ******
2025-09-19 17:03:50.547147 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547153 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.547159 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.547165 | orchestrator |
2025-09-19 17:03:50.547171 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-19 17:03:50.547178 | orchestrator | Friday 19 September 2025 16:55:59 +0000 (0:00:00.338) 0:03:11.198 ******
2025-09-19 17:03:50.547184 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547190 | orchestrator |
2025-09-19 17:03:50.547196 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-19 17:03:50.547202 | orchestrator | Friday 19 September 2025 16:55:59 +0000 (0:00:00.525) 0:03:11.723 ******
2025-09-19 17:03:50.547209 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547215 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.547221 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.547227 | orchestrator |
2025-09-19 17:03:50.547233 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-19 17:03:50.547239 | orchestrator | Friday 19 September 2025 16:56:00 +0000 (0:00:00.375) 0:03:12.098 ******
2025-09-19 17:03:50.547246 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547252 | orchestrator |
2025-09-19 17:03:50.547258 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-19 17:03:50.547264 | orchestrator | Friday 19 September 2025 16:56:00 +0000 (0:00:00.235) 0:03:12.334 ******
2025-09-19 17:03:50.547270 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547276 | orchestrator |
2025-09-19 17:03:50.547283 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-19 17:03:50.547289 | orchestrator | Friday 19 September 2025 16:56:00 +0000 (0:00:00.278) 0:03:12.613 ******
2025-09-19 17:03:50.547295 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547301 | orchestrator |
2025-09-19 17:03:50.547307 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-19 17:03:50.547314 | orchestrator | Friday 19 September 2025 16:56:00 +0000 (0:00:00.186) 0:03:12.799 ******
2025-09-19 17:03:50.547320 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547326 | orchestrator |
2025-09-19 17:03:50.547332 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-19 17:03:50.547338 | orchestrator | Friday 19 September 2025 16:56:01 +0000 (0:00:00.271) 0:03:13.071 ******
2025-09-19 17:03:50.547345 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547351 | orchestrator |
2025-09-19 17:03:50.547357 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-19 17:03:50.547363 | orchestrator | Friday 19 September 2025 16:56:01 +0000 (0:00:00.314) 0:03:13.385 ******
2025-09-19 17:03:50.547369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:03:50.547375 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:03:50.547382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:03:50.547388 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547394 | orchestrator |
2025-09-19 17:03:50.547400 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-19 17:03:50.547407 | orchestrator | Friday 19 September 2025 16:56:01 +0000 (0:00:00.419) 0:03:13.805 ******
2025-09-19 17:03:50.547413 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547422 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.547435 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.547441 | orchestrator |
2025-09-19 17:03:50.547450 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-19 17:03:50.547457 | orchestrator | Friday 19 September 2025 16:56:02 +0000 (0:00:00.543) 0:03:14.348 ******
2025-09-19 17:03:50.547463 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547469 | orchestrator |
2025-09-19 17:03:50.547475 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-19 17:03:50.547481 | orchestrator | Friday 19 September 2025 16:56:02 +0000 (0:00:00.322) 0:03:14.671 ******
2025-09-19 17:03:50.547487 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547494 | orchestrator |
2025-09-19 17:03:50.547500 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-19 17:03:50.547506 | orchestrator | Friday 19 September 2025 16:56:03 +0000 (0:00:00.257) 0:03:14.928 ******
2025-09-19 17:03:50.547512 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.547518 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.547524 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.547531 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.547537 | orchestrator |
2025-09-19 17:03:50.547543 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-19 17:03:50.547549 | orchestrator | Friday 19 September 2025 16:56:04 +0000 (0:00:01.078) 0:03:16.007 ******
2025-09-19 17:03:50.547555 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.547562 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.547568 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.547574 | orchestrator |
2025-09-19 17:03:50.547580 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-19 17:03:50.547586 | orchestrator | Friday 19 September 2025 16:56:04 +0000 (0:00:00.562) 0:03:16.569 ******
2025-09-19 17:03:50.547592 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.547599 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.547605 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.547611 | orchestrator |
2025-09-19 17:03:50.547617 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-19 17:03:50.547623 | orchestrator | Friday 19 September 2025 16:56:06 +0000 (0:00:01.341) 0:03:17.911 ******
2025-09-19 17:03:50.547629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:03:50.547636 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:03:50.547642 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:03:50.547648 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547654 | orchestrator |
2025-09-19 17:03:50.547660 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-19 17:03:50.547666 | orchestrator | Friday 19 September 2025 16:56:06 +0000 (0:00:00.649) 0:03:18.560 ******
2025-09-19 17:03:50.547672 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.547679 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.547685 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.547691 | orchestrator |
2025-09-19 17:03:50.547697 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-09-19 17:03:50.547703 | orchestrator | Friday 19 September 2025 16:56:07 +0000 (0:00:00.353) 0:03:18.914 ******
2025-09-19 17:03:50.547709 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.547716 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.547722 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.547728 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.547734 | orchestrator |
2025-09-19 17:03:50.547740 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-09-19 17:03:50.547747 | orchestrator | Friday 19 September 2025 16:56:08 +0000 (0:00:01.147) 0:03:20.062 ******
2025-09-19 17:03:50.547753 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.547763 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.547769 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.547775 | orchestrator |
2025-09-19 17:03:50.547782 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-09-19 17:03:50.547788 | orchestrator | Friday 19 September 2025 16:56:08 +0000 (0:00:00.472) 0:03:20.534 ******
2025-09-19 17:03:50.547794 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.547800 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.547806 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.547812 | orchestrator |
2025-09-19 17:03:50.547819 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-09-19 17:03:50.547825 | orchestrator | Friday 19 September 2025 16:56:10 +0000 (0:00:01.665) 0:03:22.200 ******
2025-09-19 17:03:50.547831 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:03:50.547837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:03:50.547843 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:03:50.547861 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547868 | orchestrator |
2025-09-19 17:03:50.547874 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-09-19 17:03:50.547880 | orchestrator | Friday 19 September 2025 16:56:10 +0000 (0:00:00.632) 0:03:22.832 ******
2025-09-19 17:03:50.547886 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.547892 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.547899 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.547905 | orchestrator |
2025-09-19 17:03:50.547911 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-09-19 17:03:50.547917 | orchestrator | Friday 19 September 2025 16:56:11 +0000 (0:00:00.362) 0:03:23.194 ******
2025-09-19 17:03:50.547923 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547930 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.547936 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.547942 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.547948 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.547954 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.547960 | orchestrator |
2025-09-19 17:03:50.547966 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-19 17:03:50.547979 | orchestrator | Friday 19 September 2025 16:56:11 +0000 (0:00:00.617) 0:03:23.812 ******
2025-09-19 17:03:50.547986 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.547992 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.547998 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.548004 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:03:50.548011 | orchestrator |
2025-09-19 17:03:50.548017 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-19 17:03:50.548023 | orchestrator | Friday 19 September 2025 16:56:12 +0000 (0:00:00.914) 0:03:24.727 ******
2025-09-19 17:03:50.548029 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.548035 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.548041 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.548048 | orchestrator |
2025-09-19 17:03:50.548054 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-19 17:03:50.548060 | orchestrator | Friday 19 September 2025 16:56:13 +0000 (0:00:00.315) 0:03:25.042 ******
2025-09-19 17:03:50.548066 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.548072 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.548079 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.548085 | orchestrator |
2025-09-19 17:03:50.548091 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-19 17:03:50.548097 | orchestrator | Friday 19 September 2025 16:56:14 +0000 (0:00:01.395) 0:03:26.437 ******
2025-09-19 17:03:50.548103 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 17:03:50.548114 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 17:03:50.548121 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 17:03:50.548127 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.548133 | orchestrator |
2025-09-19 17:03:50.548139 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-19 17:03:50.548146 | orchestrator | Friday 19 September 2025 16:56:15 +0000 (0:00:00.491) 0:03:26.929 ******
2025-09-19 17:03:50.548152 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.548158 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.548164 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.548170 | orchestrator |
2025-09-19 17:03:50.548177 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-09-19 17:03:50.548183 | orchestrator |
2025-09-19 17:03:50.548189 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 17:03:50.548195 | orchestrator | Friday 19 September 2025 16:56:15 +0000 (0:00:00.588) 0:03:27.518 ******
2025-09-19 17:03:50.548202 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:03:50.548208 | orchestrator |
2025-09-19 17:03:50.548214 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 17:03:50.548220 | orchestrator | Friday 19 September 2025 16:56:16 +0000 (0:00:00.643) 0:03:28.161 ******
2025-09-19 17:03:50.548227 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:03:50.548233 | orchestrator |
2025-09-19 17:03:50.548239 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 17:03:50.548245 | orchestrator | Friday 19 September 2025 16:56:16 +0000 (0:00:00.490) 0:03:28.652 ******
2025-09-19 17:03:50.548251 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.548257 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.548264 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.548270 | orchestrator |
2025-09-19 17:03:50.548276 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 17:03:50.548282 | orchestrator | Friday 19 September 2025 16:56:17 +0000 (0:00:00.699) 0:03:29.351 ******
2025-09-19 17:03:50.548288 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.548294 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.548301 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.548307 | orchestrator |
2025-09-19 17:03:50.548313 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-19 17:03:50.548319 | orchestrator | Friday 19 September 2025 16:56:17 +0000 (0:00:00.316) 0:03:29.668 ******
2025-09-19 17:03:50.548326 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.548332 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.548338 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.548344 | orchestrator |
2025-09-19 17:03:50.548350 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-19 17:03:50.548356 | orchestrator | Friday 19 September 2025 16:56:18 +0000 (0:00:00.485) 0:03:30.154 ******
2025-09-19 17:03:50.548363 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.548369 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.548375 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.548381 | orchestrator |
2025-09-19 17:03:50.548389 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-19 17:03:50.548400 | orchestrator | Friday 19 September 2025 16:56:18 +0000 (0:00:00.264) 0:03:30.418 ******
2025-09-19 17:03:50.548410 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.548420 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.548430 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.548439 | orchestrator |
2025-09-19 17:03:50.548449 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 17:03:50.548460 | orchestrator | Friday 19 September 2025 16:56:19 +0000 (0:00:00.813) 0:03:31.232 ******
2025-09-19 17:03:50.548476 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.548487 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.548498 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.548508 | orchestrator |
2025-09-19 17:03:50.548516 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-19 17:03:50.548523 | orchestrator | Friday 19 September 2025 16:56:19 +0000 (0:00:00.364) 0:03:31.597 ******
2025-09-19 17:03:50.548529 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.548535 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.548541 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.548548 | orchestrator |
2025-09-19 17:03:50.548558 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-19 17:03:50.548568 | orchestrator | Friday 19 September 2025 16:56:20 +0000 (0:00:00.529) 0:03:32.126 ******
2025-09-19 17:03:50.548575 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.548581 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.548587 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.548593 | orchestrator |
2025-09-19 17:03:50.548599 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 17:03:50.548606 | orchestrator | Friday 19 September 2025 16:56:21 +0000 (0:00:01.144) 0:03:33.271 ******
2025-09-19 17:03:50.548612 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.548618 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.548624 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.548630 | orchestrator |
2025-09-19 17:03:50.548636 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 17:03:50.548642 | orchestrator | Friday 19 September 2025 16:56:22 +0000 (0:00:01.113) 0:03:34.384 ******
2025-09-19 17:03:50.548648 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.548654 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.548661 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.548667 | orchestrator |
2025-09-19 17:03:50.548673 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 17:03:50.548679 | orchestrator | Friday 19 September 2025 16:56:22 +0000 (0:00:00.355) 0:03:34.740 ******
2025-09-19 17:03:50.548685 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.548691 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.548697 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.548703 | orchestrator |
2025-09-19 17:03:50.548709 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 17:03:50.548715 | orchestrator | Friday 19 September 2025 16:56:23 +0000 (0:00:00.619) 0:03:35.360 ******
2025-09-19 17:03:50.548722 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.548728 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.548734 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.548740 | orchestrator |
2025-09-19 17:03:50.548746 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 17:03:50.548752 | orchestrator | Friday 19 September 2025 16:56:23 +0000 (0:00:00.346) 0:03:35.706 ******
2025-09-19 17:03:50.548758 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.548764 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.548771 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.548777 | orchestrator |
2025-09-19 17:03:50.548783 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 17:03:50.548789 | orchestrator | Friday 19 September 2025 16:56:24 +0000 (0:00:00.353) 0:03:36.059 ******
2025-09-19 17:03:50.548795 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.548801 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.548807 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.548813 | orchestrator |
2025-09-19 17:03:50.548819 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 17:03:50.548826 | orchestrator | Friday 19 September 2025 16:56:24 +0000 (0:00:00.511) 0:03:36.571 ******
2025-09-19 17:03:50.548832 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.548845 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.548881 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.548887 | orchestrator |
2025-09-19 17:03:50.548893 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 17:03:50.548899 | orchestrator | Friday 19 September 2025 16:56:25 +0000 (0:00:01.006) 0:03:37.578 ******
2025-09-19 17:03:50.548906 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.548912 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.548918 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.548924 | orchestrator |
2025-09-19 17:03:50.548930 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 17:03:50.548937 | orchestrator | Friday 19 September 2025 16:56:26 +0000 (0:00:00.421) 0:03:38.000 ******
2025-09-19 17:03:50.548943 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.548949 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.548955 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.548961 | orchestrator |
2025-09-19 17:03:50.548967 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 17:03:50.548974 | orchestrator | Friday 19 September 2025 16:56:26 +0000 (0:00:00.355) 0:03:38.355 ******
2025-09-19 17:03:50.548980 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.548986 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.548992 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.548998 | orchestrator |
2025-09-19 17:03:50.549005 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 17:03:50.549011 | orchestrator | Friday 19 September 2025 16:56:26 +0000 (0:00:00.330) 0:03:38.685 ******
2025-09-19 17:03:50.549017 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.549023 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.549029 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.549035 | orchestrator |
2025-09-19 17:03:50.549042 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-09-19 17:03:50.549048 | orchestrator | Friday 19 September 2025 16:56:27 +0000 (0:00:00.652) 0:03:39.338 ******
2025-09-19 17:03:50.549054 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.549060 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.549066 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.549072 | orchestrator |
2025-09-19 17:03:50.549079 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-09-19 17:03:50.549085 | orchestrator | Friday 19 September 2025 16:56:28 +0000 (0:00:00.610) 0:03:39.948 ******
2025-09-19 17:03:50.549091 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:03:50.549097 | orchestrator |
2025-09-19 17:03:50.549104 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-09-19 17:03:50.549110 | orchestrator | Friday 19 September 2025 16:56:28 +0000 (0:00:00.553) 0:03:40.502 ******
2025-09-19 17:03:50.549116 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.549122 | orchestrator |
2025-09-19 17:03:50.549129 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-09-19 17:03:50.549139 | orchestrator | Friday 19 September 2025 16:56:28 +0000 (0:00:00.245) 0:03:40.747 ******
2025-09-19 17:03:50.549145 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-09-19 17:03:50.549151 | orchestrator |
2025-09-19 17:03:50.549161 | orchestrator | TASK [ceph-mon : Set_fact
_initial_mon_key_success] **************************** 2025-09-19 17:03:50.549167 | orchestrator | Friday 19 September 2025 16:56:30 +0000 (0:00:01.366) 0:03:42.114 ****** 2025-09-19 17:03:50.549174 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.549180 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.549186 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.549192 | orchestrator | 2025-09-19 17:03:50.549198 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-19 17:03:50.549204 | orchestrator | Friday 19 September 2025 16:56:30 +0000 (0:00:00.742) 0:03:42.857 ****** 2025-09-19 17:03:50.549210 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.549221 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.549227 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.549234 | orchestrator | 2025-09-19 17:03:50.549240 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-19 17:03:50.549246 | orchestrator | Friday 19 September 2025 16:56:31 +0000 (0:00:00.524) 0:03:43.381 ****** 2025-09-19 17:03:50.549252 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.549258 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:03:50.549265 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:03:50.549271 | orchestrator | 2025-09-19 17:03:50.549277 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-19 17:03:50.549282 | orchestrator | Friday 19 September 2025 16:56:32 +0000 (0:00:01.212) 0:03:44.594 ****** 2025-09-19 17:03:50.549287 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.549293 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:03:50.549298 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:03:50.549304 | orchestrator | 2025-09-19 17:03:50.549309 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2025-09-19 17:03:50.549315 | orchestrator | Friday 19 September 2025 16:56:33 +0000 (0:00:00.950) 0:03:45.545 ****** 2025-09-19 17:03:50.549320 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.549325 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:03:50.549331 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:03:50.549336 | orchestrator | 2025-09-19 17:03:50.549342 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-19 17:03:50.549347 | orchestrator | Friday 19 September 2025 16:56:34 +0000 (0:00:00.657) 0:03:46.202 ****** 2025-09-19 17:03:50.549352 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.549358 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.549363 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.549369 | orchestrator | 2025-09-19 17:03:50.549374 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-19 17:03:50.549380 | orchestrator | Friday 19 September 2025 16:56:34 +0000 (0:00:00.659) 0:03:46.861 ****** 2025-09-19 17:03:50.549385 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.549390 | orchestrator | 2025-09-19 17:03:50.549396 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-19 17:03:50.549401 | orchestrator | Friday 19 September 2025 16:56:36 +0000 (0:00:01.230) 0:03:48.092 ****** 2025-09-19 17:03:50.549407 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.549412 | orchestrator | 2025-09-19 17:03:50.549417 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-19 17:03:50.549423 | orchestrator | Friday 19 September 2025 16:56:36 +0000 (0:00:00.644) 0:03:48.736 ****** 2025-09-19 17:03:50.549428 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 17:03:50.549434 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:03:50.549439 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:03:50.549444 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 17:03:50.549450 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-09-19 17:03:50.549455 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 17:03:50.549461 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 17:03:50.549466 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-19 17:03:50.549471 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 17:03:50.549477 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-09-19 17:03:50.549482 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-19 17:03:50.549488 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-19 17:03:50.549493 | orchestrator | 2025-09-19 17:03:50.549498 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-19 17:03:50.549504 | orchestrator | Friday 19 September 2025 16:56:40 +0000 (0:00:04.074) 0:03:52.811 ****** 2025-09-19 17:03:50.549513 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.549518 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:03:50.549524 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:03:50.549529 | orchestrator | 2025-09-19 17:03:50.549534 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-19 17:03:50.549540 | orchestrator | Friday 19 September 2025 16:56:42 +0000 (0:00:01.395) 0:03:54.206 ****** 2025-09-19 17:03:50.549545 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.549551 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.549556 | orchestrator | ok: [testbed-node-2] 
2025-09-19 17:03:50.549561 | orchestrator | 2025-09-19 17:03:50.549567 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-19 17:03:50.549572 | orchestrator | Friday 19 September 2025 16:56:42 +0000 (0:00:00.299) 0:03:54.506 ****** 2025-09-19 17:03:50.549578 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.549583 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.549588 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.549594 | orchestrator | 2025-09-19 17:03:50.549599 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-19 17:03:50.549604 | orchestrator | Friday 19 September 2025 16:56:42 +0000 (0:00:00.261) 0:03:54.768 ****** 2025-09-19 17:03:50.549610 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.549615 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:03:50.549621 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:03:50.549626 | orchestrator | 2025-09-19 17:03:50.549634 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-19 17:03:50.549642 | orchestrator | Friday 19 September 2025 16:56:44 +0000 (0:00:01.556) 0:03:56.324 ****** 2025-09-19 17:03:50.549648 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.549653 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:03:50.549659 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:03:50.549664 | orchestrator | 2025-09-19 17:03:50.549670 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-19 17:03:50.549675 | orchestrator | Friday 19 September 2025 16:56:45 +0000 (0:00:01.484) 0:03:57.809 ****** 2025-09-19 17:03:50.549680 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.549686 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.549691 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.549697 
| orchestrator | 2025-09-19 17:03:50.549702 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-19 17:03:50.549708 | orchestrator | Friday 19 September 2025 16:56:46 +0000 (0:00:00.332) 0:03:58.141 ****** 2025-09-19 17:03:50.549713 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:03:50.549719 | orchestrator | 2025-09-19 17:03:50.549724 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-19 17:03:50.549729 | orchestrator | Friday 19 September 2025 16:56:46 +0000 (0:00:00.503) 0:03:58.645 ****** 2025-09-19 17:03:50.549735 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.549740 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.549746 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.549751 | orchestrator | 2025-09-19 17:03:50.549756 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-19 17:03:50.549762 | orchestrator | Friday 19 September 2025 16:56:47 +0000 (0:00:00.582) 0:03:59.227 ****** 2025-09-19 17:03:50.549767 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.549773 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.549778 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.549784 | orchestrator | 2025-09-19 17:03:50.549789 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-19 17:03:50.549794 | orchestrator | Friday 19 September 2025 16:56:47 +0000 (0:00:00.358) 0:03:59.586 ****** 2025-09-19 17:03:50.549800 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:03:50.549809 | orchestrator | 2025-09-19 17:03:50.549814 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
***************** 2025-09-19 17:03:50.549820 | orchestrator | Friday 19 September 2025 16:56:48 +0000 (0:00:00.586) 0:04:00.172 ****** 2025-09-19 17:03:50.549825 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.549831 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:03:50.549836 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:03:50.549841 | orchestrator | 2025-09-19 17:03:50.549856 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-19 17:03:50.549862 | orchestrator | Friday 19 September 2025 16:56:50 +0000 (0:00:01.889) 0:04:02.062 ****** 2025-09-19 17:03:50.549868 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.549873 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:03:50.549878 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:03:50.549884 | orchestrator | 2025-09-19 17:03:50.549889 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-19 17:03:50.549895 | orchestrator | Friday 19 September 2025 16:56:51 +0000 (0:00:01.566) 0:04:03.629 ****** 2025-09-19 17:03:50.549900 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.549905 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:03:50.549911 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:03:50.549916 | orchestrator | 2025-09-19 17:03:50.549922 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-19 17:03:50.549927 | orchestrator | Friday 19 September 2025 16:56:53 +0000 (0:00:01.979) 0:04:05.608 ****** 2025-09-19 17:03:50.549932 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:03:50.549938 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:03:50.549943 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:03:50.549949 | orchestrator | 2025-09-19 17:03:50.549954 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] 
********************************** 2025-09-19 17:03:50.549960 | orchestrator | Friday 19 September 2025 16:56:55 +0000 (0:00:02.009) 0:04:07.617 ****** 2025-09-19 17:03:50.549965 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-09-19 17:03:50.549971 | orchestrator | 2025-09-19 17:03:50.549976 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-19 17:03:50.549981 | orchestrator | Friday 19 September 2025 16:56:56 +0000 (0:00:00.876) 0:04:08.494 ****** 2025-09-19 17:03:50.549987 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-09-19 17:03:50.549992 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.549998 | orchestrator | 2025-09-19 17:03:50.550003 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-19 17:03:50.550009 | orchestrator | Friday 19 September 2025 16:57:18 +0000 (0:00:22.097) 0:04:30.592 ****** 2025-09-19 17:03:50.550037 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.550044 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.550050 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.550055 | orchestrator | 2025-09-19 17:03:50.550060 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-19 17:03:50.550066 | orchestrator | Friday 19 September 2025 16:57:27 +0000 (0:00:08.794) 0:04:39.387 ****** 2025-09-19 17:03:50.550072 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550077 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550083 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550088 | orchestrator | 2025-09-19 17:03:50.550093 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-19 17:03:50.550099 | orchestrator | 
Friday 19 September 2025 16:57:27 +0000 (0:00:00.303) 0:04:39.690 ****** 2025-09-19 17:03:50.550111 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e24d5c52e36f3e1c9795b6c7441b404b93f53876'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-19 17:03:50.550123 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e24d5c52e36f3e1c9795b6c7441b404b93f53876'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-19 17:03:50.550131 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e24d5c52e36f3e1c9795b6c7441b404b93f53876'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-19 17:03:50.550137 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e24d5c52e36f3e1c9795b6c7441b404b93f53876'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-19 17:03:50.550144 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 
'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e24d5c52e36f3e1c9795b6c7441b404b93f53876'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-19 17:03:50.550150 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__e24d5c52e36f3e1c9795b6c7441b404b93f53876'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__e24d5c52e36f3e1c9795b6c7441b404b93f53876'}])  2025-09-19 17:03:50.550156 | orchestrator | 2025-09-19 17:03:50.550162 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 17:03:50.550167 | orchestrator | Friday 19 September 2025 16:57:41 +0000 (0:00:14.050) 0:04:53.741 ****** 2025-09-19 17:03:50.550173 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550178 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550184 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550189 | orchestrator | 2025-09-19 17:03:50.550195 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-19 17:03:50.550200 | orchestrator | Friday 19 September 2025 16:57:42 +0000 (0:00:00.341) 0:04:54.082 ****** 2025-09-19 17:03:50.550206 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-2, testbed-node-1 2025-09-19 17:03:50.550211 | orchestrator | 2025-09-19 17:03:50.550217 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-19 17:03:50.550222 | orchestrator | Friday 19 September 2025 16:57:42 +0000 (0:00:00.552) 0:04:54.635 ****** 2025-09-19 17:03:50.550227 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.550233 | orchestrator | ok: [testbed-node-1] 2025-09-19 
17:03:50.550238 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.550244 | orchestrator | 2025-09-19 17:03:50.550249 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-19 17:03:50.550255 | orchestrator | Friday 19 September 2025 16:57:43 +0000 (0:00:00.656) 0:04:55.291 ****** 2025-09-19 17:03:50.550260 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550266 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550271 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550276 | orchestrator | 2025-09-19 17:03:50.550282 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-19 17:03:50.550293 | orchestrator | Friday 19 September 2025 16:57:43 +0000 (0:00:00.350) 0:04:55.642 ****** 2025-09-19 17:03:50.550299 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-19 17:03:50.550304 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-19 17:03:50.550310 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-19 17:03:50.550315 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550320 | orchestrator | 2025-09-19 17:03:50.550326 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-19 17:03:50.550331 | orchestrator | Friday 19 September 2025 16:57:44 +0000 (0:00:00.569) 0:04:56.211 ****** 2025-09-19 17:03:50.550337 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.550342 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.550348 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.550353 | orchestrator | 2025-09-19 17:03:50.550367 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-19 17:03:50.550373 | orchestrator | 2025-09-19 17:03:50.550381 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2025-09-19 17:03:50.550387 | orchestrator | Friday 19 September 2025 16:57:45 +0000 (0:00:00.817) 0:04:57.029 ****** 2025-09-19 17:03:50.550392 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:03:50.550398 | orchestrator | 2025-09-19 17:03:50.550403 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-19 17:03:50.550409 | orchestrator | Friday 19 September 2025 16:57:45 +0000 (0:00:00.499) 0:04:57.528 ****** 2025-09-19 17:03:50.550414 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1, testbed-node-0, testbed-node-2 2025-09-19 17:03:50.550420 | orchestrator | 2025-09-19 17:03:50.550425 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 17:03:50.550431 | orchestrator | Friday 19 September 2025 16:57:46 +0000 (0:00:00.565) 0:04:58.094 ****** 2025-09-19 17:03:50.550436 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.550442 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.550447 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.550452 | orchestrator | 2025-09-19 17:03:50.550458 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 17:03:50.550463 | orchestrator | Friday 19 September 2025 16:57:47 +0000 (0:00:00.955) 0:04:59.050 ****** 2025-09-19 17:03:50.550469 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550474 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550480 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550485 | orchestrator | 2025-09-19 17:03:50.550490 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 17:03:50.550496 | orchestrator | Friday 19 September 2025 16:57:47 +0000 
(0:00:00.295) 0:04:59.345 ****** 2025-09-19 17:03:50.550501 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550507 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550512 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550517 | orchestrator | 2025-09-19 17:03:50.550523 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 17:03:50.550528 | orchestrator | Friday 19 September 2025 16:57:47 +0000 (0:00:00.320) 0:04:59.665 ****** 2025-09-19 17:03:50.550534 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550539 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550545 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550550 | orchestrator | 2025-09-19 17:03:50.550555 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 17:03:50.550561 | orchestrator | Friday 19 September 2025 16:57:48 +0000 (0:00:00.298) 0:04:59.964 ****** 2025-09-19 17:03:50.550566 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.550572 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.550580 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.550586 | orchestrator | 2025-09-19 17:03:50.550591 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-19 17:03:50.550597 | orchestrator | Friday 19 September 2025 16:57:49 +0000 (0:00:00.959) 0:05:00.923 ****** 2025-09-19 17:03:50.550602 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550608 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550613 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550619 | orchestrator | 2025-09-19 17:03:50.550624 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 17:03:50.550630 | orchestrator | Friday 19 September 2025 16:57:49 +0000 (0:00:00.335) 
0:05:01.259 ****** 2025-09-19 17:03:50.550635 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550640 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550646 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550651 | orchestrator | 2025-09-19 17:03:50.550657 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 17:03:50.550662 | orchestrator | Friday 19 September 2025 16:57:49 +0000 (0:00:00.294) 0:05:01.554 ****** 2025-09-19 17:03:50.550668 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.550673 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.550678 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.550684 | orchestrator | 2025-09-19 17:03:50.550689 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 17:03:50.550695 | orchestrator | Friday 19 September 2025 16:57:50 +0000 (0:00:00.681) 0:05:02.236 ****** 2025-09-19 17:03:50.550700 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.550706 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.550711 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.550716 | orchestrator | 2025-09-19 17:03:50.550722 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 17:03:50.550727 | orchestrator | Friday 19 September 2025 16:57:51 +0000 (0:00:01.265) 0:05:03.501 ****** 2025-09-19 17:03:50.550733 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550738 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550743 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550749 | orchestrator | 2025-09-19 17:03:50.550754 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 17:03:50.550760 | orchestrator | Friday 19 September 2025 16:57:51 +0000 (0:00:00.298) 0:05:03.799 ****** 2025-09-19 
17:03:50.550765 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.550770 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.550776 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.550781 | orchestrator | 2025-09-19 17:03:50.550787 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 17:03:50.550792 | orchestrator | Friday 19 September 2025 16:57:52 +0000 (0:00:00.328) 0:05:04.128 ****** 2025-09-19 17:03:50.550798 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550803 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550808 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550814 | orchestrator | 2025-09-19 17:03:50.550819 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 17:03:50.550825 | orchestrator | Friday 19 September 2025 16:57:52 +0000 (0:00:00.292) 0:05:04.421 ****** 2025-09-19 17:03:50.550830 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550836 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550844 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550861 | orchestrator | 2025-09-19 17:03:50.550870 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 17:03:50.550875 | orchestrator | Friday 19 September 2025 16:57:53 +0000 (0:00:00.538) 0:05:04.960 ****** 2025-09-19 17:03:50.550881 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550886 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550892 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550897 | orchestrator | 2025-09-19 17:03:50.550906 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 17:03:50.550912 | orchestrator | Friday 19 September 2025 16:57:53 +0000 (0:00:00.410) 0:05:05.370 ****** 2025-09-19 17:03:50.550917 | 
orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550923 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550928 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550933 | orchestrator | 2025-09-19 17:03:50.550939 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 17:03:50.550944 | orchestrator | Friday 19 September 2025 16:57:53 +0000 (0:00:00.376) 0:05:05.746 ****** 2025-09-19 17:03:50.550950 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.550955 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.550961 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.550966 | orchestrator | 2025-09-19 17:03:50.550971 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 17:03:50.550977 | orchestrator | Friday 19 September 2025 16:57:54 +0000 (0:00:00.316) 0:05:06.062 ****** 2025-09-19 17:03:50.550982 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.550988 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.550993 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.550998 | orchestrator | 2025-09-19 17:03:50.551004 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 17:03:50.551009 | orchestrator | Friday 19 September 2025 16:57:54 +0000 (0:00:00.362) 0:05:06.424 ****** 2025-09-19 17:03:50.551015 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.551020 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.551025 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.551031 | orchestrator | 2025-09-19 17:03:50.551036 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 17:03:50.551042 | orchestrator | Friday 19 September 2025 16:57:55 +0000 (0:00:00.562) 0:05:06.987 ****** 2025-09-19 17:03:50.551047 | orchestrator | ok: [testbed-node-0] 
2025-09-19 17:03:50.551053 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.551058 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.551063 | orchestrator |
2025-09-19 17:03:50.551069 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2025-09-19 17:03:50.551074 | orchestrator | Friday 19 September 2025 16:57:55 +0000 (0:00:00.709) 0:05:07.697 ******
2025-09-19 17:03:50.551080 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 17:03:50.551085 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 17:03:50.551091 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 17:03:50.551096 | orchestrator |
2025-09-19 17:03:50.551102 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2025-09-19 17:03:50.551107 | orchestrator | Friday 19 September 2025 16:57:56 +0000 (0:00:00.913) 0:05:08.610 ******
2025-09-19 17:03:50.551112 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:03:50.551118 | orchestrator |
2025-09-19 17:03:50.551123 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2025-09-19 17:03:50.551129 | orchestrator | Friday 19 September 2025 16:57:57 +0000 (0:00:00.745) 0:05:09.356 ******
2025-09-19 17:03:50.551134 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.551139 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.551145 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.551150 | orchestrator |
2025-09-19 17:03:50.551156 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2025-09-19 17:03:50.551161 | orchestrator | Friday 19 September 2025 16:57:58 +0000 (0:00:00.691) 0:05:10.047 ******
2025-09-19 17:03:50.551167 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.551172 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.551178 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.551183 | orchestrator |
2025-09-19 17:03:50.551188 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2025-09-19 17:03:50.551198 | orchestrator | Friday 19 September 2025 16:57:58 +0000 (0:00:00.320) 0:05:10.368 ******
2025-09-19 17:03:50.551204 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 17:03:50.551209 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 17:03:50.551215 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 17:03:50.551220 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2025-09-19 17:03:50.551226 | orchestrator |
2025-09-19 17:03:50.551231 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2025-09-19 17:03:50.551236 | orchestrator | Friday 19 September 2025 16:58:08 +0000 (0:00:09.762) 0:05:20.131 ******
2025-09-19 17:03:50.551242 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.551247 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.551253 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.551258 | orchestrator |
2025-09-19 17:03:50.551263 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2025-09-19 17:03:50.551269 | orchestrator | Friday 19 September 2025 16:58:08 +0000 (0:00:00.596) 0:05:20.727 ******
2025-09-19 17:03:50.551274 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-19 17:03:50.551280 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-19 17:03:50.551285 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-19 17:03:50.551291 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-09-19 17:03:50.551296 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:03:50.551301 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:03:50.551307 | orchestrator |
2025-09-19 17:03:50.551315 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2025-09-19 17:03:50.551324 | orchestrator | Friday 19 September 2025 16:58:11 +0000 (0:00:02.204) 0:05:22.932 ******
2025-09-19 17:03:50.551329 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-09-19 17:03:50.551335 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-09-19 17:03:50.551340 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-09-19 17:03:50.551346 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 17:03:50.551351 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-19 17:03:50.551357 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-19 17:03:50.551362 | orchestrator |
2025-09-19 17:03:50.551368 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2025-09-19 17:03:50.551373 | orchestrator | Friday 19 September 2025 16:58:12 +0000 (0:00:01.321) 0:05:24.254 ******
2025-09-19 17:03:50.551379 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.551384 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.551390 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.551395 | orchestrator |
2025-09-19 17:03:50.551400 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2025-09-19 17:03:50.551406 | orchestrator | Friday 19 September 2025 16:58:13 +0000 (0:00:00.699) 0:05:24.953 ******
2025-09-19 17:03:50.551411 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.551417 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.551422 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.551427 | orchestrator |
2025-09-19 17:03:50.551433 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2025-09-19 17:03:50.551438 | orchestrator | Friday 19 September 2025 16:58:13 +0000 (0:00:00.289) 0:05:25.242 ******
2025-09-19 17:03:50.551444 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.551449 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.551454 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.551460 | orchestrator |
2025-09-19 17:03:50.551465 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2025-09-19 17:03:50.551471 | orchestrator | Friday 19 September 2025 16:58:13 +0000 (0:00:00.532) 0:05:25.775 ******
2025-09-19 17:03:50.551480 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:03:50.551485 | orchestrator |
2025-09-19 17:03:50.551491 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2025-09-19 17:03:50.551496 | orchestrator | Friday 19 September 2025 16:58:14 +0000 (0:00:00.524) 0:05:26.300 ******
2025-09-19 17:03:50.551501 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.551507 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.551512 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.551518 | orchestrator |
2025-09-19 17:03:50.551523 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2025-09-19 17:03:50.551528 | orchestrator | Friday 19 September 2025 16:58:14 +0000 (0:00:00.353) 0:05:26.653 ******
2025-09-19 17:03:50.551534 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.551539 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.551545 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:03:50.551550 | orchestrator |
2025-09-19 17:03:50.551556 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2025-09-19 17:03:50.551561 | orchestrator | Friday 19 September 2025 16:58:15 +0000 (0:00:00.564) 0:05:27.218 ******
2025-09-19 17:03:50.551567 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:03:50.551572 | orchestrator |
2025-09-19 17:03:50.551577 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2025-09-19 17:03:50.551583 | orchestrator | Friday 19 September 2025 16:58:15 +0000 (0:00:00.551) 0:05:27.769 ******
2025-09-19 17:03:50.551588 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.551594 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.551599 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.551604 | orchestrator |
2025-09-19 17:03:50.551610 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2025-09-19 17:03:50.551615 | orchestrator | Friday 19 September 2025 16:58:17 +0000 (0:00:01.398) 0:05:29.168 ******
2025-09-19 17:03:50.551621 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.551626 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.551631 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.551637 | orchestrator |
2025-09-19 17:03:50.551642 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2025-09-19 17:03:50.551648 | orchestrator | Friday 19 September 2025 16:58:18 +0000 (0:00:01.818) 0:05:30.574 ******
2025-09-19 17:03:50.551653 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.551658 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.551664 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.551669 | orchestrator |
2025-09-19 17:03:50.551675 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2025-09-19 17:03:50.551680 | orchestrator | Friday 19 September 2025 16:58:20 +0000 (0:00:01.818) 0:05:32.392 ******
2025-09-19 17:03:50.551685 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.551691 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.551696 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.551702 | orchestrator |
2025-09-19 17:03:50.551707 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2025-09-19 17:03:50.551713 | orchestrator | Friday 19 September 2025 16:58:22 +0000 (0:00:01.917) 0:05:34.310 ******
2025-09-19 17:03:50.551718 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.551723 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:03:50.551729 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2025-09-19 17:03:50.551734 | orchestrator |
2025-09-19 17:03:50.551740 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2025-09-19 17:03:50.551745 | orchestrator | Friday 19 September 2025 16:58:22 +0000 (0:00:00.402) 0:05:34.712 ******
2025-09-19 17:03:50.551750 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2025-09-19 17:03:50.551762 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2025-09-19 17:03:50.551776 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2025-09-19 17:03:50.551781 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2025-09-19 17:03:50.551787 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2025-09-19 17:03:50.551792 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left).
2025-09-19 17:03:50.551798 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-19 17:03:50.551803 | orchestrator |
2025-09-19 17:03:50.551809 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-09-19 17:03:50.551814 | orchestrator | Friday 19 September 2025 16:58:59 +0000 (0:00:36.772) 0:06:11.485 ******
2025-09-19 17:03:50.551820 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-19 17:03:50.551825 | orchestrator |
2025-09-19 17:03:50.551831 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-09-19 17:03:50.551836 | orchestrator | Friday 19 September 2025 16:59:01 +0000 (0:00:01.950) 0:06:13.435 ******
2025-09-19 17:03:50.551841 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.551858 | orchestrator |
2025-09-19 17:03:50.551863 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-09-19 17:03:50.551869 | orchestrator | Friday 19 September 2025 16:59:01 +0000 (0:00:00.179) 0:06:13.785 ******
2025-09-19 17:03:50.551874 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.551880 | orchestrator |
2025-09-19 17:03:50.551885 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-09-19 17:03:50.551891 | orchestrator | Friday 19 September 2025 16:59:02 +0000 (0:00:00.179) 0:06:13.965 ******
2025-09-19 17:03:50.551896 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-09-19 17:03:50.551902 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-09-19 17:03:50.551907 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-09-19 17:03:50.551912 | orchestrator |
2025-09-19 17:03:50.551918 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-09-19 17:03:50.551923 | orchestrator | Friday 19 September 2025 16:59:08 +0000 (0:00:06.435) 0:06:20.400 ******
2025-09-19 17:03:50.551929 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-09-19 17:03:50.551934 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-09-19 17:03:50.551940 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-09-19 17:03:50.551945 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-09-19 17:03:50.551950 | orchestrator |
2025-09-19 17:03:50.551956 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-19 17:03:50.551961 | orchestrator | Friday 19 September 2025 16:59:13 +0000 (0:00:05.114) 0:06:25.515 ******
2025-09-19 17:03:50.551967 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.551972 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.551978 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.551983 | orchestrator |
2025-09-19 17:03:50.551988 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-19 17:03:50.551994 | orchestrator | Friday 19 September 2025 16:59:14 +0000 (0:00:00.969) 0:06:26.484 ******
2025-09-19 17:03:50.551999 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:03:50.552005 | orchestrator |
2025-09-19 17:03:50.552010 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-19 17:03:50.552016 | orchestrator | Friday 19 September 2025 16:59:15 +0000 (0:00:00.507) 0:06:26.992 ******
2025-09-19 17:03:50.552025 | orchestrator | ok: [testbed-node-0]
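The "Set _ceph_mgr_modules fact", "Set _disabled_ceph_mgr_modules fact", "Disable ceph mgr enabled modules", and "Add modules to ceph-mgr" tasks above reconcile the desired module list against the JSON output of `ceph mgr module ls`: modules enabled on the cluster but not desired are disabled, desired modules not yet enabled are enabled, and always-on modules (the skipped `balancer`/`status` items) are left alone. A hedged sketch of that reconciliation logic (not ceph-ansible's actual code; the sample JSON is illustrative):

```python
import json

# Illustrative `ceph mgr module ls --format json` output (assumed shape).
module_ls = json.loads("""
{"always_on_modules": ["balancer", "status"],
 "enabled_modules": ["iostat", "nfs", "restful"]}
""")
desired = ["balancer", "dashboard", "prometheus", "status"]

always_on = set(module_ls["always_on_modules"])
enabled = set(module_ls["enabled_modules"])

# Enabled but not desired -> `ceph mgr module disable <m>`
to_disable = sorted(enabled - set(desired))
# Desired but neither enabled nor always-on -> `ceph mgr module enable <m>`
to_enable = [m for m in desired if m not in enabled and m not in always_on]
# Always-on modules are skipped, matching the skipped items in the log
skipped = [m for m in desired if m in always_on]

print(to_disable)  # ['iostat', 'nfs', 'restful']
print(to_enable)   # ['dashboard', 'prometheus']
print(skipped)     # ['balancer', 'status']
```

With the sample input this yields exactly the item lists seen in the two tasks above.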
2025-09-19 17:03:50.552030 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.552036 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.552041 | orchestrator |
2025-09-19 17:03:50.552047 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-19 17:03:50.552052 | orchestrator | Friday 19 September 2025 16:59:15 +0000 (0:00:00.325) 0:06:27.318 ******
2025-09-19 17:03:50.552058 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.552063 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.552068 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.552074 | orchestrator |
2025-09-19 17:03:50.552079 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-19 17:03:50.552085 | orchestrator | Friday 19 September 2025 16:59:17 +0000 (0:00:01.889) 0:06:29.207 ******
2025-09-19 17:03:50.552090 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-19 17:03:50.552095 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-19 17:03:50.552101 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-19 17:03:50.552106 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:03:50.552112 | orchestrator |
2025-09-19 17:03:50.552117 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-19 17:03:50.552122 | orchestrator | Friday 19 September 2025 16:59:17 +0000 (0:00:00.629) 0:06:29.836 ******
2025-09-19 17:03:50.552128 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.552133 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.552139 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.552144 | orchestrator |
2025-09-19 17:03:50.552150 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-09-19 17:03:50.552155 | orchestrator |
2025-09-19 17:03:50.552160 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 17:03:50.552166 | orchestrator | Friday 19 September 2025 16:59:18 +0000 (0:00:00.563) 0:06:30.400 ******
2025-09-19 17:03:50.552177 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.552183 | orchestrator |
2025-09-19 17:03:50.552189 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 17:03:50.552194 | orchestrator | Friday 19 September 2025 16:59:19 +0000 (0:00:00.695) 0:06:31.096 ******
2025-09-19 17:03:50.552199 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.552205 | orchestrator |
2025-09-19 17:03:50.552210 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 17:03:50.552216 | orchestrator | Friday 19 September 2025 16:59:19 +0000 (0:00:00.529) 0:06:31.625 ******
2025-09-19 17:03:50.552221 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.552227 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.552232 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.552237 | orchestrator |
2025-09-19 17:03:50.552243 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 17:03:50.552248 | orchestrator | Friday 19 September 2025 16:59:20 +0000 (0:00:00.296) 0:06:31.922 ******
2025-09-19 17:03:50.552254 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.552259 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.552264 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.552270 | orchestrator |
2025-09-19 17:03:50.552275 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-19 17:03:50.552281 | orchestrator | Friday 19 September 2025 16:59:21 +0000 (0:00:00.984) 0:06:32.907 ******
2025-09-19 17:03:50.552286 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.552292 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.552297 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.552302 | orchestrator |
2025-09-19 17:03:50.552308 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-19 17:03:50.552316 | orchestrator | Friday 19 September 2025 16:59:21 +0000 (0:00:00.746) 0:06:33.654 ******
2025-09-19 17:03:50.552322 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.552327 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.552333 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.552338 | orchestrator |
2025-09-19 17:03:50.552343 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-19 17:03:50.552349 | orchestrator | Friday 19 September 2025 16:59:22 +0000 (0:00:00.737) 0:06:34.391 ******
2025-09-19 17:03:50.552354 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.552360 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.552365 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.552370 | orchestrator |
2025-09-19 17:03:50.552376 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 17:03:50.552381 | orchestrator | Friday 19 September 2025 16:59:22 +0000 (0:00:00.368) 0:06:34.760 ******
2025-09-19 17:03:50.552387 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.552392 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.552397 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.552403 | orchestrator |
2025-09-19 17:03:50.552408 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-19 17:03:50.552414 | orchestrator | Friday 19 September 2025 16:59:23 +0000 (0:00:00.590) 0:06:35.350 ******
2025-09-19 17:03:50.552419 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.552425 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.552430 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.552435 | orchestrator |
2025-09-19 17:03:50.552441 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-19 17:03:50.552446 | orchestrator | Friday 19 September 2025 16:59:23 +0000 (0:00:00.309) 0:06:35.660 ******
2025-09-19 17:03:50.552451 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.552457 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.552462 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.552468 | orchestrator |
2025-09-19 17:03:50.552473 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 17:03:50.552479 | orchestrator | Friday 19 September 2025 16:59:24 +0000 (0:00:00.743) 0:06:36.404 ******
2025-09-19 17:03:50.552484 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.552490 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.552495 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.552500 | orchestrator |
2025-09-19 17:03:50.552506 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 17:03:50.552511 | orchestrator | Friday 19 September 2025 16:59:25 +0000 (0:00:00.761) 0:06:37.166 ******
2025-09-19 17:03:50.552517 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.552522 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.552527 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.552533 | orchestrator |
2025-09-19 17:03:50.552538 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 17:03:50.552544 | orchestrator | Friday 19 September 2025 16:59:25 +0000 (0:00:00.511) 0:06:37.677 ******
2025-09-19 17:03:50.552549 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.552555 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.552560 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.552565 | orchestrator |
2025-09-19 17:03:50.552571 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 17:03:50.552576 | orchestrator | Friday 19 September 2025 16:59:26 +0000 (0:00:00.334) 0:06:38.011 ******
2025-09-19 17:03:50.552582 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.552587 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.552592 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.552598 | orchestrator |
2025-09-19 17:03:50.552603 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 17:03:50.552609 | orchestrator | Friday 19 September 2025 16:59:26 +0000 (0:00:00.341) 0:06:38.353 ******
2025-09-19 17:03:50.552618 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.552623 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.552629 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.552634 | orchestrator |
2025-09-19 17:03:50.552640 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 17:03:50.552645 | orchestrator | Friday 19 September 2025 16:59:26 +0000 (0:00:00.334) 0:06:38.687 ******
2025-09-19 17:03:50.552650 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.552656 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.552665 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.552670 | orchestrator |
2025-09-19 17:03:50.552676 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 17:03:50.552681 | orchestrator | Friday 19 September 2025 16:59:27 +0000 (0:00:00.594) 0:06:39.282 ******
2025-09-19 17:03:50.552687 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.552692 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.552697 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.552703 | orchestrator |
2025-09-19 17:03:50.552708 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 17:03:50.552714 | orchestrator | Friday 19 September 2025 16:59:27 +0000 (0:00:00.331) 0:06:39.614 ******
2025-09-19 17:03:50.552719 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.552725 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.552730 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.552735 | orchestrator |
2025-09-19 17:03:50.552741 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 17:03:50.552746 | orchestrator | Friday 19 September 2025 16:59:28 +0000 (0:00:00.328) 0:06:39.943 ******
2025-09-19 17:03:50.552752 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.552757 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.552762 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.552768 | orchestrator |
2025-09-19 17:03:50.552773 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 17:03:50.552779 | orchestrator | Friday 19 September 2025 16:59:28 +0000 (0:00:00.298) 0:06:40.241 ******
2025-09-19 17:03:50.552784 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.552790 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.552795 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.552800 | orchestrator |
2025-09-19 17:03:50.552806 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 17:03:50.552811 | orchestrator | Friday 19 September 2025 16:59:28 +0000 (0:00:00.565) 0:06:40.806 ******
2025-09-19 17:03:50.552817 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.552822 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.552828 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.552833 | orchestrator |
2025-09-19 17:03:50.552838 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-09-19 17:03:50.552844 | orchestrator | Friday 19 September 2025 16:59:29 +0000 (0:00:00.532) 0:06:41.338 ******
2025-09-19 17:03:50.552875 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.552881 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.552886 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.552892 | orchestrator |
2025-09-19 17:03:50.552897 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-09-19 17:03:50.552903 | orchestrator | Friday 19 September 2025 16:59:29 +0000 (0:00:00.320) 0:06:41.659 ******
2025-09-19 17:03:50.552908 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 17:03:50.552914 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 17:03:50.552919 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 17:03:50.552924 | orchestrator |
2025-09-19 17:03:50.552930 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-09-19 17:03:50.552935 | orchestrator | Friday 19 September 2025 16:59:30 +0000 (0:00:00.900) 0:06:42.559 ******
2025-09-19 17:03:50.552945 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.552950 | orchestrator |
2025-09-19 17:03:50.552955 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-09-19 17:03:50.552961 | orchestrator | Friday 19 September 2025 16:59:31 +0000 (0:00:00.786) 0:06:43.346 ******
2025-09-19 17:03:50.552966 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.552972 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.552977 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.552982 | orchestrator |
2025-09-19 17:03:50.553030 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-09-19 17:03:50.553044 | orchestrator | Friday 19 September 2025 16:59:31 +0000 (0:00:00.329) 0:06:43.676 ******
2025-09-19 17:03:50.553049 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.553055 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.553060 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.553065 | orchestrator |
2025-09-19 17:03:50.553071 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-09-19 17:03:50.553076 | orchestrator | Friday 19 September 2025 16:59:32 +0000 (0:00:00.298) 0:06:43.974 ******
2025-09-19 17:03:50.553081 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.553087 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.553092 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.553097 | orchestrator |
2025-09-19 17:03:50.553103 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-09-19 17:03:50.553108 | orchestrator | Friday 19 September 2025 16:59:33 +0000 (0:00:00.936) 0:06:44.911 ******
2025-09-19 17:03:50.553113 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.553119 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.553124 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.553129 | orchestrator |
2025-09-19 17:03:50.553134 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-09-19 17:03:50.553140 | orchestrator | Friday 19 September 2025 16:59:33 +0000 (0:00:00.363) 0:06:45.275 ******
2025-09-19 17:03:50.553145 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-19 17:03:50.553150 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-19 17:03:50.553156 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-19 17:03:50.553161 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-19 17:03:50.553171 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-09-19 17:03:50.553179 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-19 17:03:50.553185 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-19 17:03:50.553190 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-09-19 17:03:50.553195 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-19 17:03:50.553200 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-19 17:03:50.553205 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-09-19 17:03:50.553210 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-19 17:03:50.553214 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-09-19 17:03:50.553219 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-19 17:03:50.553224 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-09-19 17:03:50.553232 | orchestrator | 2025-09-19 17:03:50.553237 | orchestrator 
2025-09-19 17:03:50.553237 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-09-19 17:03:50.553242 | orchestrator | Friday 19 September 2025 16:59:37 +0000 (0:00:04.151) 0:06:49.426 ******
2025-09-19 17:03:50.553247 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.553252 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.553256 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.553261 | orchestrator |
2025-09-19 17:03:50.553266 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-09-19 17:03:50.553271 | orchestrator | Friday 19 September 2025 16:59:37 +0000 (0:00:00.335) 0:06:49.762 ******
2025-09-19 17:03:50.553276 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.553280 | orchestrator |
2025-09-19 17:03:50.553285 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-09-19 17:03:50.553290 | orchestrator | Friday 19 September 2025 16:59:38 +0000 (0:00:00.787) 0:06:50.550 ******
2025-09-19 17:03:50.553295 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-19 17:03:50.553300 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-19 17:03:50.553304 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-19 17:03:50.553309 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-09-19 17:03:50.553314 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-09-19 17:03:50.553319 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-09-19 17:03:50.553324 | orchestrator |
2025-09-19 17:03:50.553329 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-09-19 17:03:50.553333 | orchestrator | Friday 19 September 2025 16:59:39 +0000 (0:00:01.128) 0:06:51.678 ******
2025-09-19 17:03:50.553338 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:03:50.553343 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-19 17:03:50.553348 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-19 17:03:50.553353 | orchestrator |
2025-09-19 17:03:50.553357 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-09-19 17:03:50.553362 | orchestrator | Friday 19 September 2025 16:59:42 +0000 (0:00:02.195) 0:06:53.873 ******
2025-09-19 17:03:50.553367 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 17:03:50.553372 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-19 17:03:50.553377 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.553381 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 17:03:50.553386 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-19 17:03:50.553391 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.553396 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 17:03:50.553400 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-19 17:03:50.553405 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.553410 | orchestrator |
2025-09-19 17:03:50.553415 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-09-19 17:03:50.553419 | orchestrator | Friday 19 September 2025 16:59:43 +0000 (0:00:01.233) 0:06:55.107 ******
2025-09-19 17:03:50.553424 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 17:03:50.553429 | orchestrator |
2025-09-19 17:03:50.553434 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-09-19 17:03:50.553439 | orchestrator | Friday 19 September 2025 16:59:46 +0000 (0:00:02.775) 0:06:57.883 ******
2025-09-19 17:03:50.553443 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.553448 | orchestrator |
2025-09-19 17:03:50.553453 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-09-19 17:03:50.553458 | orchestrator | Friday 19 September 2025 16:59:46 +0000 (0:00:00.533) 0:06:58.416 ******
2025-09-19 17:03:50.553466 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70', 'data_vg': 'ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70'})
2025-09-19 17:03:50.553471 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2', 'data_vg': 'ceph-6bee08d2-4d0c-5efd-9bb6-6357ac0256e2'})
2025-09-19 17:03:50.553479 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-4de995f9-e371-53ec-a5e6-95298d442fa2', 'data_vg': 'ceph-4de995f9-e371-53ec-a5e6-95298d442fa2'})
2025-09-19 17:03:50.553487 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-189b9442-6cba-5a76-9378-3098f039bcec', 'data_vg': 'ceph-189b9442-6cba-5a76-9378-3098f039bcec'})
2025-09-19 17:03:50.553492 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-ea687e85-c7c1-53f3-8dfd-7d637eed1a38', 'data_vg': 'ceph-ea687e85-c7c1-53f3-8dfd-7d637eed1a38'})
2025-09-19 17:03:50.553497 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7', 'data_vg': 'ceph-c5ef3a10-bb06-5cc2-b298-3a565f19d9a7'})
2025-09-19 17:03:50.553501 | orchestrator |
2025-09-19 17:03:50.553506 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-09-19 17:03:50.553511 | orchestrator | Friday 19 September 2025 17:00:26 +0000 (0:00:39.766) 0:07:38.183 ******
2025-09-19 17:03:50.553516 | orchestrator | skipping: [testbed-node-3]
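Each item in the "Use ceph-volume to create osds" task above names a logical volume (`data`) and its volume group (`data_vg`). A sketch of the corresponding `ceph-volume` invocation for one item; the exact flags ceph-ansible passes may differ, and the bluestore objectstore default here is an assumption:

```python
# Build a hypothetical `ceph-volume lvm create` command from one loop item
# ({'data': LV_NAME, 'data_vg': VG_NAME}), using the documented vg/lv form
# for --data.
def ceph_volume_cmd(item, objectstore="bluestore"):
    return [
        "ceph-volume", "lvm", "create",
        "--" + objectstore,
        "--data", f"{item['data_vg']}/{item['data']}",
    ]

item = {
    "data": "osd-block-502e1679-2b8a-59ad-b2cc-f53252d80a70",
    "data_vg": "ceph-502e1679-2b8a-59ad-b2cc-f53252d80a70",
}
print(" ".join(ceph_volume_cmd(item)))
```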
2025-09-19 17:03:50.553521 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.553525 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.553530 | orchestrator |
2025-09-19 17:03:50.553535 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-09-19 17:03:50.553540 | orchestrator | Friday 19 September 2025 17:00:26 +0000 (0:00:00.550) 0:07:38.734 ******
2025-09-19 17:03:50.553545 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.553549 | orchestrator |
2025-09-19 17:03:50.553554 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-09-19 17:03:50.553559 | orchestrator | Friday 19 September 2025 17:00:27 +0000 (0:00:00.544) 0:07:39.279 ******
2025-09-19 17:03:50.553564 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.553568 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.553573 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.553578 | orchestrator |
2025-09-19 17:03:50.553583 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-09-19 17:03:50.553588 | orchestrator | Friday 19 September 2025 17:00:28 +0000 (0:00:00.670) 0:07:39.949 ******
2025-09-19 17:03:50.553592 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.553597 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.553602 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.553607 | orchestrator |
2025-09-19 17:03:50.553611 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-09-19 17:03:50.553616 | orchestrator | Friday 19 September 2025 17:00:31 +0000 (0:00:02.937) 0:07:42.887 ******
2025-09-19 17:03:50.553621 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.553626 | orchestrator |
2025-09-19 17:03:50.553630 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-09-19 17:03:50.553635 | orchestrator | Friday 19 September 2025 17:00:31 +0000 (0:00:00.539) 0:07:43.426 ******
2025-09-19 17:03:50.553640 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.553645 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.553650 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.553654 | orchestrator |
2025-09-19 17:03:50.553659 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-09-19 17:03:50.553664 | orchestrator | Friday 19 September 2025 17:00:32 +0000 (0:00:01.168) 0:07:44.595 ******
2025-09-19 17:03:50.553669 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.553674 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.553681 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.553686 | orchestrator |
2025-09-19 17:03:50.553691 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-09-19 17:03:50.553696 | orchestrator | Friday 19 September 2025 17:00:34 +0000 (0:00:01.457) 0:07:46.052 ******
2025-09-19 17:03:50.553701 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.553705 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.553710 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.553715 | orchestrator |
2025-09-19 17:03:50.553720 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-09-19 17:03:50.553724 | orchestrator | Friday 19 September 2025 17:00:35 +0000 (0:00:01.660) 0:07:47.713 ******
2025-09-19 17:03:50.553729 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.553734 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.553739 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.553743 | orchestrator |
2025-09-19 17:03:50.553748 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-09-19 17:03:50.553753 | orchestrator | Friday 19 September 2025 17:00:36 +0000 (0:00:00.387) 0:07:48.100 ******
2025-09-19 17:03:50.553758 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.553762 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.553767 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.553772 | orchestrator |
2025-09-19 17:03:50.553777 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-09-19 17:03:50.553781 | orchestrator | Friday 19 September 2025 17:00:36 +0000 (0:00:00.367) 0:07:48.468 ******
2025-09-19 17:03:50.553786 | orchestrator | ok: [testbed-node-3] => (item=4)
2025-09-19 17:03:50.553791 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-19 17:03:50.553796 | orchestrator | ok: [testbed-node-5] => (item=1)
2025-09-19 17:03:50.553800 | orchestrator | ok: [testbed-node-3] => (item=2)
2025-09-19 17:03:50.553805 | orchestrator | ok: [testbed-node-4] => (item=3)
2025-09-19 17:03:50.553810 | orchestrator | ok: [testbed-node-5] => (item=5)
2025-09-19 17:03:50.553815 | orchestrator |
2025-09-19 17:03:50.553819 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-09-19 17:03:50.553824 | orchestrator | Friday 19 September 2025 17:00:37 +0000 (0:00:01.324) 0:07:49.793 ******
2025-09-19 17:03:50.553829 | orchestrator | changed: [testbed-node-3] => (item=4)
2025-09-19 17:03:50.553834 | orchestrator | changed: [testbed-node-4] => (item=0)
2025-09-19 17:03:50.553838 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-09-19 17:03:50.553843 | orchestrator | changed: [testbed-node-3] => (item=2)
2025-09-19 17:03:50.553891 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-09-19 17:03:50.553896 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-09-19 17:03:50.553900 | orchestrator |
2025-09-19 17:03:50.553908 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-09-19 17:03:50.553913 | orchestrator | Friday 19 September 2025 17:00:40 +0000 (0:00:02.464) 0:07:52.257 ******
2025-09-19 17:03:50.553918 | orchestrator | changed: [testbed-node-3] => (item=4)
2025-09-19 17:03:50.553922 | orchestrator | changed: [testbed-node-4] => (item=0)
2025-09-19 17:03:50.553927 | orchestrator | changed: [testbed-node-5] => (item=1)
2025-09-19 17:03:50.553932 | orchestrator | changed: [testbed-node-3] => (item=2)
2025-09-19 17:03:50.553937 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-09-19 17:03:50.553941 | orchestrator | changed: [testbed-node-5] => (item=5)
2025-09-19 17:03:50.553946 | orchestrator |
2025-09-19 17:03:50.553951 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-09-19 17:03:50.553956 | orchestrator | Friday 19 September 2025 17:00:44 +0000 (0:00:03.687) 0:07:55.945 ******
2025-09-19 17:03:50.553961 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.553965 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.553970 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-19 17:03:50.553975 | orchestrator |
2025-09-19 17:03:50.553980 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-09-19 17:03:50.553988 | orchestrator | Friday 19 September 2025 17:00:46 +0000 (0:00:02.475) 0:07:58.421 ******
2025-09-19 17:03:50.553992 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.553997 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.554002 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
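The "Wait for all osd to be up" task retries until every OSD reports up (the log shows 60 retries remaining on the first attempt). A hedged sketch of that polling pattern, assuming the cluster status query returns JSON with `num_osds`/`num_up_osds` counts as `ceph osd stat -f json` does; `fetch_stat` is a hypothetical callable standing in for the actual ceph query:

```python
# Poll an OSD-stat source until all OSDs are up, with a bounded retry count.
import json
import time

def all_osds_up(stat_json: str) -> bool:
    stat = json.loads(stat_json)
    return stat["num_osds"] > 0 and stat["num_up_osds"] == stat["num_osds"]

def wait_for_osds(fetch_stat, retries=60, delay=0):
    for _ in range(retries):
        if all_osds_up(fetch_stat()):
            return True
        time.sleep(delay)  # the real task sleeps between retries
    return False

# Simulated polls: the first sample still has one OSD down, the second is healthy.
samples = iter(['{"num_osds": 6, "num_up_osds": 5}',
                '{"num_osds": 6, "num_up_osds": 6}'])
print(wait_for_osds(lambda: next(samples)))  # → True
```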
2025-09-19 17:03:50.554007 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-09-19 17:03:50.554012 | orchestrator | 2025-09-19 17:03:50.554039 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-09-19 17:03:50.554044 | orchestrator | Friday 19 September 2025 17:00:59 +0000 (0:00:12.803) 0:08:11.224 ****** 2025-09-19 17:03:50.554049 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554054 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.554058 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.554063 | orchestrator | 2025-09-19 17:03:50.554068 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 17:03:50.554073 | orchestrator | Friday 19 September 2025 17:01:00 +0000 (0:00:00.839) 0:08:12.063 ****** 2025-09-19 17:03:50.554078 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554083 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.554087 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.554092 | orchestrator | 2025-09-19 17:03:50.554097 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-09-19 17:03:50.554102 | orchestrator | Friday 19 September 2025 17:01:00 +0000 (0:00:00.633) 0:08:12.697 ****** 2025-09-19 17:03:50.554107 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 17:03:50.554112 | orchestrator | 2025-09-19 17:03:50.554116 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-09-19 17:03:50.554121 | orchestrator | Friday 19 September 2025 17:01:01 +0000 (0:00:00.540) 0:08:13.237 ****** 2025-09-19 17:03:50.554126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 17:03:50.554131 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-09-19 17:03:50.554136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 17:03:50.554140 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554145 | orchestrator | 2025-09-19 17:03:50.554150 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-09-19 17:03:50.554155 | orchestrator | Friday 19 September 2025 17:01:01 +0000 (0:00:00.385) 0:08:13.623 ****** 2025-09-19 17:03:50.554160 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554164 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.554169 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.554174 | orchestrator | 2025-09-19 17:03:50.554178 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-09-19 17:03:50.554183 | orchestrator | Friday 19 September 2025 17:01:02 +0000 (0:00:00.328) 0:08:13.952 ****** 2025-09-19 17:03:50.554188 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554193 | orchestrator | 2025-09-19 17:03:50.554198 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-09-19 17:03:50.554203 | orchestrator | Friday 19 September 2025 17:01:02 +0000 (0:00:00.234) 0:08:14.186 ****** 2025-09-19 17:03:50.554207 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554212 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.554217 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.554221 | orchestrator | 2025-09-19 17:03:50.554226 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-09-19 17:03:50.554231 | orchestrator | Friday 19 September 2025 17:01:02 +0000 (0:00:00.538) 0:08:14.725 ****** 2025-09-19 17:03:50.554236 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554241 | orchestrator | 2025-09-19 17:03:50.554245 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-09-19 17:03:50.554254 | orchestrator | Friday 19 September 2025 17:01:03 +0000 (0:00:00.241) 0:08:14.967 ****** 2025-09-19 17:03:50.554258 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554263 | orchestrator | 2025-09-19 17:03:50.554268 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-09-19 17:03:50.554273 | orchestrator | Friday 19 September 2025 17:01:03 +0000 (0:00:00.211) 0:08:15.179 ****** 2025-09-19 17:03:50.554277 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554282 | orchestrator | 2025-09-19 17:03:50.554287 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-09-19 17:03:50.554292 | orchestrator | Friday 19 September 2025 17:01:03 +0000 (0:00:00.130) 0:08:15.310 ****** 2025-09-19 17:03:50.554296 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554301 | orchestrator | 2025-09-19 17:03:50.554309 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-09-19 17:03:50.554316 | orchestrator | Friday 19 September 2025 17:01:03 +0000 (0:00:00.211) 0:08:15.522 ****** 2025-09-19 17:03:50.554321 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554326 | orchestrator | 2025-09-19 17:03:50.554331 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-09-19 17:03:50.554336 | orchestrator | Friday 19 September 2025 17:01:03 +0000 (0:00:00.222) 0:08:15.744 ****** 2025-09-19 17:03:50.554340 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 17:03:50.554345 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 17:03:50.554350 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 17:03:50.554355 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
17:03:50.554360 | orchestrator | 2025-09-19 17:03:50.554365 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-09-19 17:03:50.554369 | orchestrator | Friday 19 September 2025 17:01:04 +0000 (0:00:00.433) 0:08:16.178 ****** 2025-09-19 17:03:50.554374 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554379 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.554384 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.554389 | orchestrator | 2025-09-19 17:03:50.554394 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-09-19 17:03:50.554398 | orchestrator | Friday 19 September 2025 17:01:04 +0000 (0:00:00.309) 0:08:16.487 ****** 2025-09-19 17:03:50.554403 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554408 | orchestrator | 2025-09-19 17:03:50.554413 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-09-19 17:03:50.554417 | orchestrator | Friday 19 September 2025 17:01:05 +0000 (0:00:00.748) 0:08:17.235 ****** 2025-09-19 17:03:50.554422 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554427 | orchestrator | 2025-09-19 17:03:50.554432 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-09-19 17:03:50.554437 | orchestrator | 2025-09-19 17:03:50.554441 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-19 17:03:50.554446 | orchestrator | Friday 19 September 2025 17:01:06 +0000 (0:00:00.665) 0:08:17.901 ****** 2025-09-19 17:03:50.554451 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:03:50.554457 | orchestrator | 2025-09-19 17:03:50.554462 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-09-19 17:03:50.554466 | orchestrator | Friday 19 September 2025 17:01:07 +0000 (0:00:01.315) 0:08:19.217 ****** 2025-09-19 17:03:50.554471 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:03:50.554476 | orchestrator | 2025-09-19 17:03:50.554481 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-19 17:03:50.554486 | orchestrator | Friday 19 September 2025 17:01:08 +0000 (0:00:01.169) 0:08:20.386 ****** 2025-09-19 17:03:50.554495 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554500 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.554505 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.554510 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.554515 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.554519 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.554524 | orchestrator | 2025-09-19 17:03:50.554529 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-19 17:03:50.554534 | orchestrator | Friday 19 September 2025 17:01:09 +0000 (0:00:01.294) 0:08:21.681 ****** 2025-09-19 17:03:50.554539 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.554543 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.554548 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.554553 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.554558 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.554563 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.554567 | orchestrator | 2025-09-19 17:03:50.554572 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-19 17:03:50.554577 | orchestrator | Friday 19 
September 2025 17:01:10 +0000 (0:00:00.759) 0:08:22.441 ****** 2025-09-19 17:03:50.554582 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.554587 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.554592 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.554596 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.554601 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.554606 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.554611 | orchestrator | 2025-09-19 17:03:50.554616 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-19 17:03:50.554620 | orchestrator | Friday 19 September 2025 17:01:11 +0000 (0:00:00.693) 0:08:23.134 ****** 2025-09-19 17:03:50.554625 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.554630 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.554635 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.554640 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.554644 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.554649 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.554654 | orchestrator | 2025-09-19 17:03:50.554659 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-19 17:03:50.554664 | orchestrator | Friday 19 September 2025 17:01:12 +0000 (0:00:01.009) 0:08:24.143 ****** 2025-09-19 17:03:50.554668 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554673 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.554678 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.554683 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.554687 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.554692 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.554697 | orchestrator | 2025-09-19 17:03:50.554702 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-09-19 17:03:50.554707 | orchestrator | Friday 19 September 2025 17:01:13 +0000 (0:00:00.989) 0:08:25.133 ****** 2025-09-19 17:03:50.554711 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554716 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.554724 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.554729 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.554736 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.554741 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.554746 | orchestrator | 2025-09-19 17:03:50.554751 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-19 17:03:50.554755 | orchestrator | Friday 19 September 2025 17:01:14 +0000 (0:00:00.811) 0:08:25.945 ****** 2025-09-19 17:03:50.554760 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554765 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.554770 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.554778 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.554783 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.554788 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.554792 | orchestrator | 2025-09-19 17:03:50.554797 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-19 17:03:50.554802 | orchestrator | Friday 19 September 2025 17:01:14 +0000 (0:00:00.553) 0:08:26.498 ****** 2025-09-19 17:03:50.554807 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.554812 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.554816 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.554821 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.554826 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.554831 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.554836 | 
orchestrator | 2025-09-19 17:03:50.554840 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-19 17:03:50.554845 | orchestrator | Friday 19 September 2025 17:01:16 +0000 (0:00:01.433) 0:08:27.932 ****** 2025-09-19 17:03:50.554860 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.554865 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.554869 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.554874 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.554879 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:03:50.554883 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.554888 | orchestrator | 2025-09-19 17:03:50.554893 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-19 17:03:50.554898 | orchestrator | Friday 19 September 2025 17:01:17 +0000 (0:00:01.140) 0:08:29.073 ****** 2025-09-19 17:03:50.554902 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554907 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.554912 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.554917 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.554921 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.554926 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.554930 | orchestrator | 2025-09-19 17:03:50.554935 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 17:03:50.554940 | orchestrator | Friday 19 September 2025 17:01:18 +0000 (0:00:00.818) 0:08:29.891 ****** 2025-09-19 17:03:50.554945 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.554950 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.554954 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.554959 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:03:50.554964 | orchestrator | ok: [testbed-node-1] 2025-09-19 
17:03:50.554968 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:03:50.554973 | orchestrator | 2025-09-19 17:03:50.554978 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 17:03:50.554983 | orchestrator | Friday 19 September 2025 17:01:18 +0000 (0:00:00.597) 0:08:30.488 ****** 2025-09-19 17:03:50.554987 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.554992 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.554997 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.555001 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.555006 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.555011 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.555016 | orchestrator | 2025-09-19 17:03:50.555020 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 17:03:50.555025 | orchestrator | Friday 19 September 2025 17:01:19 +0000 (0:00:00.850) 0:08:31.339 ****** 2025-09-19 17:03:50.555030 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.555035 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.555039 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.555044 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.555049 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.555054 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.555058 | orchestrator | 2025-09-19 17:03:50.555063 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 17:03:50.555071 | orchestrator | Friday 19 September 2025 17:01:20 +0000 (0:00:00.619) 0:08:31.958 ****** 2025-09-19 17:03:50.555076 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.555081 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.555085 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.555090 | orchestrator | skipping: [testbed-node-0] 
2025-09-19 17:03:50.555095 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.555099 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.555104 | orchestrator | 2025-09-19 17:03:50.555109 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 17:03:50.555114 | orchestrator | Friday 19 September 2025 17:01:20 +0000 (0:00:00.853) 0:08:32.811 ****** 2025-09-19 17:03:50.555118 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.555123 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.555128 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.555132 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.555137 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.555142 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.555146 | orchestrator | 2025-09-19 17:03:50.555151 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 17:03:50.555156 | orchestrator | Friday 19 September 2025 17:01:21 +0000 (0:00:00.593) 0:08:33.404 ****** 2025-09-19 17:03:50.555160 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.555165 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.555170 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.555175 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:03:50.555179 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:03:50.555184 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:03:50.555189 | orchestrator | 2025-09-19 17:03:50.555193 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 17:03:50.555198 | orchestrator | Friday 19 September 2025 17:01:22 +0000 (0:00:00.844) 0:08:34.249 ****** 2025-09-19 17:03:50.555206 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.555211 | orchestrator | skipping: [testbed-node-4] 
2025-09-19 17:03:50.555220 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.555225 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.555229 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.555234 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.555239 | orchestrator |
2025-09-19 17:03:50.555243 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 17:03:50.555248 | orchestrator | Friday 19 September 2025 17:01:22 +0000 (0:00:00.620) 0:08:34.870 ******
2025-09-19 17:03:50.555253 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.555258 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.555262 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.555267 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.555272 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.555276 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.555281 | orchestrator |
2025-09-19 17:03:50.555286 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 17:03:50.555291 | orchestrator | Friday 19 September 2025 17:01:23 +0000 (0:00:00.858) 0:08:35.728 ******
2025-09-19 17:03:50.555295 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.555300 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.555305 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.555309 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.555314 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.555318 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.555323 | orchestrator |
2025-09-19 17:03:50.555328 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-09-19 17:03:50.555333 | orchestrator | Friday 19 September 2025 17:01:25 +0000 (0:00:01.237) 0:08:36.966 ******
2025-09-19 17:03:50.555337 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 17:03:50.555342 | orchestrator |
2025-09-19 17:03:50.555347 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2025-09-19 17:03:50.555355 | orchestrator | Friday 19 September 2025 17:01:29 +0000 (0:00:04.408) 0:08:41.374 ******
2025-09-19 17:03:50.555360 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 17:03:50.555365 | orchestrator |
2025-09-19 17:03:50.555369 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2025-09-19 17:03:50.555374 | orchestrator | Friday 19 September 2025 17:01:31 +0000 (0:00:02.125) 0:08:43.499 ******
2025-09-19 17:03:50.555379 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.555384 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.555388 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.555393 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.555398 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.555402 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.555407 | orchestrator |
2025-09-19 17:03:50.555412 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2025-09-19 17:03:50.555417 | orchestrator | Friday 19 September 2025 17:01:33 +0000 (0:00:01.469) 0:08:44.969 ******
2025-09-19 17:03:50.555421 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.555426 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.555431 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.555435 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.555440 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.555445 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.555450 | orchestrator |
2025-09-19 17:03:50.555454 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2025-09-19 17:03:50.555459 | orchestrator | Friday 19 September 2025 17:01:34 +0000 (0:00:01.323) 0:08:46.293 ******
2025-09-19 17:03:50.555464 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:03:50.555469 | orchestrator |
2025-09-19 17:03:50.555474 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2025-09-19 17:03:50.555479 | orchestrator | Friday 19 September 2025 17:01:35 +0000 (0:00:01.203) 0:08:47.497 ******
2025-09-19 17:03:50.555483 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.555488 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.555493 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.555498 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.555502 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.555507 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.555512 | orchestrator |
2025-09-19 17:03:50.555516 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2025-09-19 17:03:50.555521 | orchestrator | Friday 19 September 2025 17:01:37 +0000 (0:00:01.476) 0:08:48.973 ******
2025-09-19 17:03:50.555526 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.555530 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.555535 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.555540 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.555544 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.555549 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.555554 | orchestrator |
2025-09-19 17:03:50.555558 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2025-09-19 17:03:50.555563 | orchestrator | Friday 19 September 2025 17:01:41 +0000 (0:00:04.334) 0:08:53.308 ******
2025-09-19 17:03:50.555568 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:03:50.555573 | orchestrator |
2025-09-19 17:03:50.555578 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2025-09-19 17:03:50.555583 | orchestrator | Friday 19 September 2025 17:01:42 +0000 (0:00:01.318) 0:08:54.627 ******
2025-09-19 17:03:50.555587 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.555592 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.555600 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.555605 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.555609 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.555614 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.555619 | orchestrator |
2025-09-19 17:03:50.555624 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2025-09-19 17:03:50.555628 | orchestrator | Friday 19 September 2025 17:01:43 +0000 (0:00:00.659) 0:08:55.287 ******
2025-09-19 17:03:50.555636 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.555641 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.555648 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.555653 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:03:50.555658 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:03:50.555662 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:03:50.555667 | orchestrator |
2025-09-19 17:03:50.555672 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2025-09-19 17:03:50.555677 | orchestrator | Friday 19 September 2025 17:01:45 +0000 (0:00:02.523) 0:08:57.810 ******
2025-09-19 17:03:50.555681 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.555686 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.555691 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.555696 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:03:50.555700 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:03:50.555705 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:03:50.555710 | orchestrator |
2025-09-19 17:03:50.555715 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-09-19 17:03:50.555720 | orchestrator |
2025-09-19 17:03:50.555725 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 17:03:50.555729 | orchestrator | Friday 19 September 2025 17:01:46 +0000 (0:00:00.872) 0:08:58.683 ******
2025-09-19 17:03:50.555734 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.555739 | orchestrator |
2025-09-19 17:03:50.555744 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 17:03:50.555749 | orchestrator | Friday 19 September 2025 17:01:47 +0000 (0:00:00.796) 0:08:59.479 ******
2025-09-19 17:03:50.555753 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.555758 | orchestrator |
2025-09-19 17:03:50.555763 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 17:03:50.555768 | orchestrator | Friday 19 September 2025 17:01:48 +0000 (0:00:00.512) 0:08:59.992 ******
2025-09-19 17:03:50.555773 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.555778 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.555782 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.555787 | orchestrator |
2025-09-19 17:03:50.555792 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 17:03:50.555797 | orchestrator | Friday 19 September 2025 17:01:48 +0000 (0:00:00.603) 0:09:00.596 ******
2025-09-19 17:03:50.555802 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.555806 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.555811 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.555816 | orchestrator |
2025-09-19 17:03:50.555821 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-19 17:03:50.555826 | orchestrator | Friday 19 September 2025 17:01:49 +0000 (0:00:00.790) 0:09:01.386 ******
2025-09-19 17:03:50.555830 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.555835 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.555840 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.555845 | orchestrator |
2025-09-19 17:03:50.555862 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-19 17:03:50.555867 | orchestrator | Friday 19 September 2025 17:01:50 +0000 (0:00:00.739) 0:09:02.126 ******
2025-09-19 17:03:50.555871 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.555880 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.555885 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.555890 | orchestrator |
2025-09-19 17:03:50.555895 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-19 17:03:50.555899 | orchestrator | Friday 19 September 2025 17:01:51 +0000 (0:00:00.804) 0:09:02.931 ******
2025-09-19 17:03:50.555904 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.555909 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.555914 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.555919 | orchestrator |
2025-09-19 17:03:50.555924 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 17:03:50.555929 | orchestrator | Friday 19 September 2025 17:01:51 +0000 (0:00:00.577) 0:09:03.508 ******
2025-09-19 17:03:50.555933 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.555938 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.555943 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.555948 | orchestrator |
2025-09-19 17:03:50.555953 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-19 17:03:50.555958 | orchestrator | Friday 19 September 2025 17:01:51 +0000 (0:00:00.351) 0:09:03.859 ******
2025-09-19 17:03:50.555962 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.555967 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.555972 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.555977 | orchestrator |
2025-09-19 17:03:50.555982 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-19 17:03:50.555987 | orchestrator | Friday 19 September 2025 17:01:52 +0000 (0:00:00.329) 0:09:04.189 ******
2025-09-19 17:03:50.555991 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.555996 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.556001 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.556006 | orchestrator |
2025-09-19 17:03:50.556011 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 17:03:50.556015 | orchestrator | Friday 19 September 2025 17:01:53 +0000 (0:00:00.787) 0:09:04.976 ******
2025-09-19 17:03:50.556020 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.556025 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.556030 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.556035 | orchestrator |
2025-09-19 17:03:50.556039 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 17:03:50.556044 | orchestrator | Friday 19 September 2025 17:01:54 +0000 (0:00:01.062) 0:09:06.038 ******
2025-09-19 17:03:50.556049 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.556054 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.556059 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.556064 | orchestrator |
2025-09-19 17:03:50.556069 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-19 17:03:50.556073 | orchestrator | Friday 19 September 2025 17:01:54 +0000 (0:00:00.288) 0:09:06.326 ******
2025-09-19 17:03:50.556081 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.556086 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.556093 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.556098 | orchestrator |
2025-09-19 17:03:50.556103 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-19 17:03:50.556108 | orchestrator | Friday 19 September 2025 17:01:54 +0000 (0:00:00.302) 0:09:06.628 ******
2025-09-19 17:03:50.556113 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.556118 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.556122 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.556127 | orchestrator |
2025-09-19 17:03:50.556132 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-19 17:03:50.556137 | orchestrator | Friday 19 September 2025 17:01:55 +0000 (0:00:00.321) 0:09:06.950 ******
2025-09-19 17:03:50.556142 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.556147 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.556154 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.556159 | orchestrator |
2025-09-19 17:03:50.556164 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-19 17:03:50.556169 | orchestrator | Friday 19 September 2025 17:01:55 +0000 (0:00:00.694) 0:09:07.645 ******
2025-09-19 17:03:50.556174 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.556179 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.556183 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.556188 | orchestrator |
2025-09-19 17:03:50.556193 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-19 17:03:50.556198 | orchestrator | Friday 19 September 2025 17:01:56 +0000 (0:00:00.333) 0:09:07.978 ******
2025-09-19 17:03:50.556203 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.556208 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.556212 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.556217 | orchestrator |
2025-09-19 17:03:50.556222 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-19 17:03:50.556227 | orchestrator | Friday 19 September 2025 17:01:56 +0000 (0:00:00.311) 0:09:08.289 ******
2025-09-19 17:03:50.556232 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.556237 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.556241 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.556246 | orchestrator |
2025-09-19 17:03:50.556251 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-19 17:03:50.556256 | orchestrator | Friday 19 September 2025 17:01:56 +0000 (0:00:00.291) 0:09:08.581 ******
2025-09-19 17:03:50.556261 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.556265 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.556270 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.556275 | orchestrator |
2025-09-19 17:03:50.556280 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-19 17:03:50.556285 | orchestrator | Friday 19 September 2025 17:01:57 +0000 (0:00:00.635) 0:09:09.217 ******
2025-09-19 17:03:50.556290 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.556294 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.556299 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.556304 | orchestrator |
2025-09-19 17:03:50.556309 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-19 17:03:50.556314 | orchestrator | Friday 19 September 2025 17:01:57 +0000 (0:00:00.338) 0:09:09.555 ******
2025-09-19 17:03:50.556319 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.556323 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.556328 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.556333 | orchestrator |
2025-09-19 17:03:50.556338 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-09-19 17:03:50.556343 | orchestrator | Friday 19 September 2025 17:01:58 +0000 (0:00:00.542) 0:09:10.097 ******
2025-09-19 17:03:50.556348 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.556352 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.556357 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-09-19 17:03:50.556362 | orchestrator |
2025-09-19 17:03:50.556367 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-09-19 17:03:50.556372 | orchestrator | Friday 19 September 2025 17:01:58 +0000 (0:00:00.676) 0:09:10.774 ******
2025-09-19 17:03:50.556377 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 17:03:50.556381 | orchestrator |
2025-09-19 17:03:50.556386 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-09-19 17:03:50.556391 | orchestrator | Friday 19 September 2025 17:02:01 +0000 (0:00:02.327) 0:09:13.101 ******
2025-09-19 17:03:50.556397 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-09-19 17:03:50.556405 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.556410 | orchestrator |
2025-09-19 17:03:50.556415 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-09-19 17:03:50.556419 | orchestrator | Friday 19 September 2025 17:02:01 +0000 (0:00:00.157) 0:09:13.258 ******
2025-09-19 17:03:50.556425 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 17:03:50.556435 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 17:03:50.556440 | orchestrator |
2025-09-19 17:03:50.556445 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-09-19 17:03:50.556452 | orchestrator | Friday 19 September 2025 17:02:09 +0000 (0:00:08.305) 0:09:21.564 ******
2025-09-19 17:03:50.556459 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 17:03:50.556464 | orchestrator |
2025-09-19 17:03:50.556469 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-09-19 17:03:50.556474 | orchestrator | Friday 19 September 2025 17:02:13 +0000 (0:00:04.151) 0:09:25.715 ******
2025-09-19 17:03:50.556479 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.556484 | orchestrator |
2025-09-19 17:03:50.556488 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-09-19 17:03:50.556493 | orchestrator | Friday 19 September 2025 17:02:14 +0000 (0:00:00.602) 0:09:26.318 ******
2025-09-19 17:03:50.556498 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-19 17:03:50.556503 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-19 17:03:50.556508 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-09-19 17:03:50.556512 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-09-19 17:03:50.556517 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-09-19 17:03:50.556522 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-09-19 17:03:50.556527 | orchestrator |
2025-09-19 17:03:50.556532 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-09-19 17:03:50.556536 | orchestrator | Friday 19 September 2025 17:02:15 +0000 (0:00:01.103) 0:09:27.421 ******
2025-09-19 17:03:50.556541 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:03:50.556546 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-19 17:03:50.556551 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-19 17:03:50.556556 | orchestrator |
2025-09-19 17:03:50.556560 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-09-19 17:03:50.556565 | orchestrator | Friday 19 September 2025 17:02:18 +0000 (0:00:02.520) 0:09:29.941 ******
2025-09-19 17:03:50.556570 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-19 17:03:50.556575 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-19 17:03:50.556580 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.556584 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-19 17:03:50.556589 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-19 17:03:50.556594 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.556599 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-19 17:03:50.556604 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-19 17:03:50.556608 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.556613 | orchestrator |
2025-09-19 17:03:50.556618 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-09-19 17:03:50.556626 | orchestrator | Friday 19 September 2025 17:02:19 +0000 (0:00:01.240) 0:09:31.181 ******
2025-09-19 17:03:50.556631 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.556636 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.556640 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.556645 | orchestrator |
2025-09-19 17:03:50.556650 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-09-19 17:03:50.556655 | orchestrator | Friday 19 September 2025 17:02:22 +0000 (0:00:02.745) 0:09:33.927 ******
2025-09-19 17:03:50.556660 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.556664 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.556669 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.556674 | orchestrator |
2025-09-19 17:03:50.556679 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-09-19 17:03:50.556683 | orchestrator | Friday 19 September 2025 17:02:22 +0000 (0:00:00.594) 0:09:34.522 ******
2025-09-19 17:03:50.556688 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.556693 | orchestrator |
2025-09-19 17:03:50.556698 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-09-19 17:03:50.556703 | orchestrator | Friday 19 September 2025 17:02:23 +0000 (0:00:00.573) 0:09:35.095 ******
2025-09-19 17:03:50.556707 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.556712 | orchestrator |
2025-09-19 17:03:50.556717 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-09-19 17:03:50.556722 | orchestrator | Friday 19 September 2025 17:02:24 +0000 (0:00:00.778) 0:09:35.874 ******
2025-09-19 17:03:50.556727 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.556732 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.556736 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.556741 | orchestrator |
2025-09-19 17:03:50.556746 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-09-19 17:03:50.556751 | orchestrator | Friday 19 September 2025 17:02:25 +0000 (0:00:01.293) 0:09:37.167 ******
2025-09-19 17:03:50.556755 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.556760 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.556765 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.556770 | orchestrator |
2025-09-19 17:03:50.556775 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-09-19 17:03:50.556779 | orchestrator | Friday 19 September 2025 17:02:26 +0000 (0:00:01.201) 0:09:38.369 ******
2025-09-19 17:03:50.556784 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.556789 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.556794 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.556799 | orchestrator |
2025-09-19 17:03:50.556803 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-09-19 17:03:50.556808 | orchestrator | Friday 19 September 2025 17:02:28 +0000 (0:00:01.742) 0:09:40.112 ******
2025-09-19 17:03:50.556813 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.556820 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.556825 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.556830 | orchestrator |
2025-09-19 17:03:50.556838 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-09-19 17:03:50.556843 | orchestrator | Friday 19 September 2025 17:02:30 +0000 (0:00:02.259) 0:09:42.371 ******
2025-09-19 17:03:50.556872 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.556878 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.556882 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.556887 | orchestrator |
2025-09-19 17:03:50.556892 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-19 17:03:50.556897 | orchestrator | Friday 19 September 2025 17:02:31 +0000 (0:00:01.232) 0:09:43.604 ******
2025-09-19 17:03:50.556905 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.556910 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.556915 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.556920 | orchestrator |
2025-09-19 17:03:50.556924 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-09-19 17:03:50.556929 | orchestrator | Friday 19 September 2025 17:02:32 +0000 (0:00:00.933) 0:09:44.538 ******
2025-09-19 17:03:50.556934 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.556939 | orchestrator |
2025-09-19 17:03:50.556944 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-09-19 17:03:50.556949 | orchestrator | Friday 19 September 2025 17:02:33 +0000 (0:00:00.536) 0:09:45.074 ******
2025-09-19 17:03:50.556953 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.556958 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.556963 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.556968 | orchestrator |
2025-09-19 17:03:50.556973 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-09-19 17:03:50.556978 | orchestrator | Friday 19 September 2025 17:02:33 +0000 (0:00:00.309) 0:09:45.383 ******
2025-09-19 17:03:50.556982 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:03:50.556987 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:03:50.556992 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:03:50.556997 | orchestrator |
2025-09-19 17:03:50.557001 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-09-19 17:03:50.557006 | orchestrator | Friday 19 September 2025 17:02:35 +0000 (0:00:01.616) 0:09:47.000 ******
2025-09-19 17:03:50.557011 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:03:50.557016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:03:50.557021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:03:50.557025 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.557030 | orchestrator |
2025-09-19 17:03:50.557035 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-09-19 17:03:50.557040 | orchestrator | Friday 19 September 2025 17:02:35 +0000 (0:00:00.800) 0:09:47.801 ******
2025-09-19 17:03:50.557044 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.557049 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.557054 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.557059 | orchestrator |
2025-09-19 17:03:50.557063 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-09-19 17:03:50.557068 | orchestrator |
2025-09-19 17:03:50.557073 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-19 17:03:50.557078 | orchestrator | Friday 19 September 2025 17:02:36 +0000 (0:00:00.596) 0:09:48.398 ******
2025-09-19 17:03:50.557083 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.557088 | orchestrator |
2025-09-19 17:03:50.557092 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-19 17:03:50.557097 | orchestrator | Friday 19 September 2025 17:02:37 +0000 (0:00:00.747) 0:09:49.146 ******
2025-09-19 17:03:50.557102 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:03:50.557107 | orchestrator |
2025-09-19 17:03:50.557111 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-19 17:03:50.557116 | orchestrator | Friday 19 September 2025 17:02:37 +0000 (0:00:00.526) 0:09:49.672 ******
2025-09-19 17:03:50.557121 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.557126 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.557130 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.557135 | orchestrator |
2025-09-19 17:03:50.557140 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-19 17:03:50.557145 | orchestrator | Friday 19 September 2025 17:02:38 +0000 (0:00:00.527) 0:09:50.199 ******
2025-09-19 17:03:50.557153 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.557157 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.557162 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.557167 | orchestrator |
2025-09-19 17:03:50.557172 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-19 17:03:50.557176 | orchestrator | Friday 19 September 2025 17:02:39 +0000 (0:00:00.738) 0:09:50.937 ******
2025-09-19 17:03:50.557181 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.557186 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.557191 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.557196 | orchestrator |
2025-09-19 17:03:50.557200 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-19 17:03:50.557205 | orchestrator | Friday 19 September 2025 17:02:39 +0000 (0:00:00.715) 0:09:51.653 ******
2025-09-19 17:03:50.557210 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.557215 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.557220 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.557225 | orchestrator |
2025-09-19 17:03:50.557229 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-19 17:03:50.557234 | orchestrator | Friday 19 September 2025 17:02:40 +0000 (0:00:00.806) 0:09:52.459 ******
2025-09-19 17:03:50.557239 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.557244 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.557248 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.557253 | orchestrator |
2025-09-19 17:03:50.557261 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-19 17:03:50.557268 | orchestrator | Friday 19 September 2025 17:02:41 +0000 (0:00:00.561) 0:09:53.021 ******
2025-09-19 17:03:50.557273 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.557278 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.557283 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.557288 | orchestrator |
2025-09-19 17:03:50.557292 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-19 17:03:50.557297 | orchestrator | Friday 19 September 2025 17:02:41 +0000 (0:00:00.318) 0:09:53.339 ******
2025-09-19 17:03:50.557302 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.557307 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.557311 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:03:50.557316 | orchestrator |
2025-09-19 17:03:50.557321 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-19 17:03:50.557326 | orchestrator | Friday 19 September 2025 17:02:41 +0000 (0:00:00.308) 0:09:53.647 ******
2025-09-19 17:03:50.557331 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.557335 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.557340 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.557345 | orchestrator |
2025-09-19 17:03:50.557350 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-19 17:03:50.557355 | orchestrator | Friday 19 September 2025 17:02:42 +0000 (0:00:00.827) 0:09:54.475 ******
2025-09-19 17:03:50.557359 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:03:50.557364 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:03:50.557369 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:03:50.557374 | orchestrator |
2025-09-19 17:03:50.557378 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-19 17:03:50.557383 | orchestrator | Friday 19 September 2025 17:02:43 +0000 (0:00:00.930) 0:09:55.406 ******
2025-09-19 17:03:50.557388 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:03:50.557393 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:03:50.557398 | orchestrator | skipping: [testbed-node-5]
2025-09-19
17:03:50.557402 | orchestrator | 2025-09-19 17:03:50.557407 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-19 17:03:50.557411 | orchestrator | Friday 19 September 2025 17:02:43 +0000 (0:00:00.312) 0:09:55.718 ****** 2025-09-19 17:03:50.557416 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.557423 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.557428 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.557432 | orchestrator | 2025-09-19 17:03:50.557437 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-19 17:03:50.557442 | orchestrator | Friday 19 September 2025 17:02:44 +0000 (0:00:00.326) 0:09:56.045 ****** 2025-09-19 17:03:50.557446 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.557451 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.557455 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.557460 | orchestrator | 2025-09-19 17:03:50.557464 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-19 17:03:50.557469 | orchestrator | Friday 19 September 2025 17:02:44 +0000 (0:00:00.333) 0:09:56.379 ****** 2025-09-19 17:03:50.557473 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.557478 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.557482 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.557487 | orchestrator | 2025-09-19 17:03:50.557491 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-19 17:03:50.557496 | orchestrator | Friday 19 September 2025 17:02:45 +0000 (0:00:00.542) 0:09:56.922 ****** 2025-09-19 17:03:50.557500 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.557505 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.557509 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.557514 | orchestrator | 2025-09-19 
17:03:50.557519 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-19 17:03:50.557523 | orchestrator | Friday 19 September 2025 17:02:45 +0000 (0:00:00.327) 0:09:57.249 ****** 2025-09-19 17:03:50.557528 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.557532 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.557537 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.557541 | orchestrator | 2025-09-19 17:03:50.557546 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-19 17:03:50.557551 | orchestrator | Friday 19 September 2025 17:02:45 +0000 (0:00:00.322) 0:09:57.572 ****** 2025-09-19 17:03:50.557555 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.557560 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.557564 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.557569 | orchestrator | 2025-09-19 17:03:50.557573 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-19 17:03:50.557578 | orchestrator | Friday 19 September 2025 17:02:46 +0000 (0:00:00.319) 0:09:57.891 ****** 2025-09-19 17:03:50.557582 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.557587 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.557591 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.557596 | orchestrator | 2025-09-19 17:03:50.557600 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-19 17:03:50.557605 | orchestrator | Friday 19 September 2025 17:02:46 +0000 (0:00:00.576) 0:09:58.468 ****** 2025-09-19 17:03:50.557609 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.557614 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.557619 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.557623 | orchestrator | 2025-09-19 17:03:50.557627 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-19 17:03:50.557632 | orchestrator | Friday 19 September 2025 17:02:46 +0000 (0:00:00.365) 0:09:58.833 ****** 2025-09-19 17:03:50.557637 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.557641 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.557646 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.557650 | orchestrator | 2025-09-19 17:03:50.557655 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-09-19 17:03:50.557659 | orchestrator | Friday 19 September 2025 17:02:47 +0000 (0:00:00.534) 0:09:59.368 ****** 2025-09-19 17:03:50.557664 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 17:03:50.557668 | orchestrator | 2025-09-19 17:03:50.557677 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-19 17:03:50.557685 | orchestrator | Friday 19 September 2025 17:02:48 +0000 (0:00:00.765) 0:10:00.133 ****** 2025-09-19 17:03:50.557692 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:03:50.557697 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 17:03:50.557702 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 17:03:50.557706 | orchestrator | 2025-09-19 17:03:50.557711 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-19 17:03:50.557715 | orchestrator | Friday 19 September 2025 17:02:50 +0000 (0:00:02.263) 0:10:02.397 ****** 2025-09-19 17:03:50.557720 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 17:03:50.557725 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-19 17:03:50.557729 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:03:50.557734 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2025-09-19 17:03:50.557738 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-19 17:03:50.557743 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 17:03:50.557747 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:03:50.557752 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-19 17:03:50.557756 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:03:50.557761 | orchestrator | 2025-09-19 17:03:50.557765 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-09-19 17:03:50.557770 | orchestrator | Friday 19 September 2025 17:02:51 +0000 (0:00:01.235) 0:10:03.632 ****** 2025-09-19 17:03:50.557775 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.557779 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.557784 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.557788 | orchestrator | 2025-09-19 17:03:50.557793 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-09-19 17:03:50.557797 | orchestrator | Friday 19 September 2025 17:02:52 +0000 (0:00:00.322) 0:10:03.955 ****** 2025-09-19 17:03:50.557802 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 17:03:50.557807 | orchestrator | 2025-09-19 17:03:50.557811 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-09-19 17:03:50.557816 | orchestrator | Friday 19 September 2025 17:02:52 +0000 (0:00:00.759) 0:10:04.714 ****** 2025-09-19 17:03:50.557820 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.557825 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.557830 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.557834 | orchestrator | 2025-09-19 17:03:50.557839 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-09-19 17:03:50.557843 | orchestrator | Friday 19 September 2025 17:02:53 +0000 (0:00:00.861) 0:10:05.575 ****** 2025-09-19 17:03:50.557858 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:03:50.557863 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 17:03:50.557867 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:03:50.557872 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 17:03:50.557877 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:03:50.557884 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-09-19 17:03:50.557889 | orchestrator | 2025-09-19 17:03:50.557893 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-09-19 17:03:50.557898 | orchestrator | Friday 19 September 2025 17:02:58 +0000 (0:00:05.035) 0:10:10.611 ****** 2025-09-19 17:03:50.557903 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:03:50.557907 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 17:03:50.557912 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:03:50.557916 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 17:03:50.557921 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:03:50.557925 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 17:03:50.557930 | orchestrator | 2025-09-19 17:03:50.557934 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-09-19 17:03:50.557939 | orchestrator | Friday 19 September 2025 17:03:01 +0000 (0:00:02.964) 0:10:13.576 ****** 2025-09-19 17:03:50.557944 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 17:03:50.557948 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:03:50.557953 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 17:03:50.557957 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:03:50.557962 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 17:03:50.557966 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:03:50.557971 | orchestrator | 2025-09-19 17:03:50.557975 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-09-19 17:03:50.557983 | orchestrator | Friday 19 September 2025 17:03:02 +0000 (0:00:01.234) 0:10:14.810 ****** 2025-09-19 17:03:50.557990 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-09-19 17:03:50.557994 | orchestrator | 2025-09-19 17:03:50.557999 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-09-19 17:03:50.558003 | orchestrator | Friday 19 September 2025 17:03:03 +0000 (0:00:00.234) 0:10:15.044 ****** 2025-09-19 17:03:50.558008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2025-09-19 17:03:50.558013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 17:03:50.558032 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 17:03:50.558036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 17:03:50.558041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 17:03:50.558045 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.558050 | orchestrator | 2025-09-19 17:03:50.558054 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-09-19 17:03:50.558059 | orchestrator | Friday 19 September 2025 17:03:03 +0000 (0:00:00.571) 0:10:15.616 ****** 2025-09-19 17:03:50.558063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 17:03:50.558068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 17:03:50.558073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 17:03:50.558077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 17:03:50.558085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-09-19 17:03:50.558090 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
17:03:50.558094 | orchestrator | 2025-09-19 17:03:50.558099 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-09-19 17:03:50.558103 | orchestrator | Friday 19 September 2025 17:03:04 +0000 (0:00:00.585) 0:10:16.202 ****** 2025-09-19 17:03:50.558108 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 17:03:50.558112 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 17:03:50.558117 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 17:03:50.558121 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 17:03:50.558126 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-09-19 17:03:50.558131 | orchestrator | 2025-09-19 17:03:50.558135 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-09-19 17:03:50.558140 | orchestrator | Friday 19 September 2025 17:03:36 +0000 (0:00:31.865) 0:10:48.067 ****** 2025-09-19 17:03:50.558144 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.558149 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.558154 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.558158 | orchestrator | 2025-09-19 17:03:50.558163 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-09-19 17:03:50.558167 | orchestrator | 
Friday 19 September 2025 17:03:36 +0000 (0:00:00.348) 0:10:48.416 ****** 2025-09-19 17:03:50.558172 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.558176 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.558181 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.558185 | orchestrator | 2025-09-19 17:03:50.558190 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-09-19 17:03:50.558194 | orchestrator | Friday 19 September 2025 17:03:37 +0000 (0:00:00.569) 0:10:48.985 ****** 2025-09-19 17:03:50.558199 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 17:03:50.558204 | orchestrator | 2025-09-19 17:03:50.558208 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-09-19 17:03:50.558213 | orchestrator | Friday 19 September 2025 17:03:37 +0000 (0:00:00.575) 0:10:49.561 ****** 2025-09-19 17:03:50.558217 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 17:03:50.558222 | orchestrator | 2025-09-19 17:03:50.558229 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-09-19 17:03:50.558236 | orchestrator | Friday 19 September 2025 17:03:38 +0000 (0:00:00.806) 0:10:50.367 ****** 2025-09-19 17:03:50.558241 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:03:50.558246 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:03:50.558250 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:03:50.558255 | orchestrator | 2025-09-19 17:03:50.558259 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-09-19 17:03:50.558264 | orchestrator | Friday 19 September 2025 17:03:39 +0000 (0:00:01.385) 0:10:51.753 ****** 2025-09-19 17:03:50.558268 | orchestrator | changed: 
[testbed-node-4] 2025-09-19 17:03:50.558273 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:03:50.558277 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:03:50.558286 | orchestrator | 2025-09-19 17:03:50.558290 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-09-19 17:03:50.558295 | orchestrator | Friday 19 September 2025 17:03:41 +0000 (0:00:01.192) 0:10:52.946 ****** 2025-09-19 17:03:50.558300 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:03:50.558304 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:03:50.558309 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:03:50.558313 | orchestrator | 2025-09-19 17:03:50.558318 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-09-19 17:03:50.558322 | orchestrator | Friday 19 September 2025 17:03:42 +0000 (0:00:01.741) 0:10:54.687 ****** 2025-09-19 17:03:50.558327 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.558331 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.558336 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-19 17:03:50.558341 | orchestrator | 2025-09-19 17:03:50.558345 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-19 17:03:50.558350 | orchestrator | Friday 19 September 2025 17:03:45 +0000 (0:00:02.779) 0:10:57.467 ****** 2025-09-19 17:03:50.558354 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.558359 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.558363 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.558368 | orchestrator 
| 2025-09-19 17:03:50.558372 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-09-19 17:03:50.558377 | orchestrator | Friday 19 September 2025 17:03:45 +0000 (0:00:00.339) 0:10:57.807 ****** 2025-09-19 17:03:50.558382 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 17:03:50.558386 | orchestrator | 2025-09-19 17:03:50.558391 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-09-19 17:03:50.558395 | orchestrator | Friday 19 September 2025 17:03:46 +0000 (0:00:00.785) 0:10:58.592 ****** 2025-09-19 17:03:50.558400 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.558405 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.558409 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.558413 | orchestrator | 2025-09-19 17:03:50.558418 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-09-19 17:03:50.558423 | orchestrator | Friday 19 September 2025 17:03:47 +0000 (0:00:00.320) 0:10:58.912 ****** 2025-09-19 17:03:50.558427 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:03:50.558432 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:03:50.558436 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:03:50.558441 | orchestrator | 2025-09-19 17:03:50.558445 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-09-19 17:03:50.558450 | orchestrator | Friday 19 September 2025 17:03:47 +0000 (0:00:00.331) 0:10:59.243 ****** 2025-09-19 17:03:50.558454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-19 17:03:50.558459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-19 17:03:50.558463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-19 17:03:50.558468 | orchestrator 
| skipping: [testbed-node-3] 2025-09-19 17:03:50.558472 | orchestrator | 2025-09-19 17:03:50.558477 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-09-19 17:03:50.558482 | orchestrator | Friday 19 September 2025 17:03:48 +0000 (0:00:01.100) 0:11:00.343 ****** 2025-09-19 17:03:50.558486 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:03:50.558491 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:03:50.558495 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:03:50.558500 | orchestrator | 2025-09-19 17:03:50.558504 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:03:50.558513 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-09-19 17:03:50.558518 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-09-19 17:03:50.558522 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-09-19 17:03:50.558527 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-09-19 17:03:50.558532 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-09-19 17:03:50.558539 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-09-19 17:03:50.558544 | orchestrator | 2025-09-19 17:03:50.558548 | orchestrator | 2025-09-19 17:03:50.558553 | orchestrator | 2025-09-19 17:03:50.558560 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:03:50.558565 | orchestrator | Friday 19 September 2025 17:03:48 +0000 (0:00:00.252) 0:11:00.596 ****** 2025-09-19 17:03:50.558569 | orchestrator | =============================================================================== 
2025-09-19 17:03:50.558574 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 45.32s 2025-09-19 17:03:50.558579 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.77s 2025-09-19 17:03:50.558583 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.77s 2025-09-19 17:03:50.558588 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.87s 2025-09-19 17:03:50.558592 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.10s 2025-09-19 17:03:50.558596 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.05s 2025-09-19 17:03:50.558601 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.80s 2025-09-19 17:03:50.558606 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.76s 2025-09-19 17:03:50.558610 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.79s 2025-09-19 17:03:50.558615 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.31s 2025-09-19 17:03:50.558619 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.84s 2025-09-19 17:03:50.558623 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.44s 2025-09-19 17:03:50.558628 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.11s 2025-09-19 17:03:50.558633 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.04s 2025-09-19 17:03:50.558637 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.41s 2025-09-19 17:03:50.558642 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.41s 2025-09-19 
17:03:50.558646 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 4.33s
2025-09-19 17:03:50.558650 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.15s
2025-09-19 17:03:50.558655 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.15s
2025-09-19 17:03:50.558659 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.07s
2025-09-19 17:03:50.558664 | orchestrator | 2025-09-19 17:03:50 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:03:53.586575 | orchestrator | 2025-09-19 17:03:53 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:03:53.586993 | orchestrator | 2025-09-19 17:03:53 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:03:53.589843 | orchestrator | 2025-09-19 17:03:53 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:03:53.589909 | orchestrator | 2025-09-19 17:03:53 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:03:56.628706 | orchestrator | 2025-09-19 17:03:56 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:03:56.631237 | orchestrator | 2025-09-19 17:03:56 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:03:56.633141 | orchestrator | 2025-09-19 17:03:56 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:03:56.633481 | orchestrator | 2025-09-19 17:03:56 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:03:59.672440 | orchestrator | 2025-09-19 17:03:59 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:03:59.673939 | orchestrator | 2025-09-19 17:03:59 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:03:59.675474 | orchestrator | 2025-09-19 17:03:59 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:03:59.675497 | orchestrator | 2025-09-19 17:03:59 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:02.729432 | orchestrator | 2025-09-19 17:04:02 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:04:02.731776 | orchestrator | 2025-09-19 17:04:02 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:02.734062 | orchestrator | 2025-09-19 17:04:02 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:04:02.734371 | orchestrator | 2025-09-19 17:04:02 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:05.780465 | orchestrator | 2025-09-19 17:04:05 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:04:05.781908 | orchestrator | 2025-09-19 17:04:05 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:05.784558 | orchestrator | 2025-09-19 17:04:05 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:04:05.784648 | orchestrator | 2025-09-19 17:04:05 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:08.823500 | orchestrator | 2025-09-19 17:04:08 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:04:08.823596 | orchestrator | 2025-09-19 17:04:08 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:08.823675 | orchestrator | 2025-09-19 17:04:08 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:04:08.824289 | orchestrator | 2025-09-19 17:04:08 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:11.864564 | orchestrator | 2025-09-19 17:04:11 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:04:11.868366 | orchestrator | 2025-09-19 17:04:11 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:11.869978 | orchestrator | 2025-09-19 17:04:11 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:04:11.870176 | orchestrator | 2025-09-19 17:04:11 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:14.910323 | orchestrator | 2025-09-19 17:04:14 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:04:14.912022 | orchestrator | 2025-09-19 17:04:14 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:14.913828 | orchestrator | 2025-09-19 17:04:14 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:04:14.914086 | orchestrator | 2025-09-19 17:04:14 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:17.961989 | orchestrator | 2025-09-19 17:04:17 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:04:17.963435 | orchestrator | 2025-09-19 17:04:17 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:17.964327 | orchestrator | 2025-09-19 17:04:17 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:04:17.964732 | orchestrator | 2025-09-19 17:04:17 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:21.014535 | orchestrator | 2025-09-19 17:04:21 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:04:21.016162 | orchestrator | 2025-09-19 17:04:21 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:21.017228 | orchestrator | 2025-09-19 17:04:21 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state STARTED
2025-09-19 17:04:21.017268 | orchestrator | 2025-09-19 17:04:21 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:24.061911 | orchestrator | 2025-09-19 17:04:24 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:04:24.063202 | orchestrator | 2025-09-19 17:04:24 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:24.065480 | orchestrator | 2025-09-19 17:04:24 | INFO  | Task 8a656981-50f2-4057-81f4-9e6507b13637 is in state SUCCESS
2025-09-19 17:04:24.065526 | orchestrator | 2025-09-19 17:04:24 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:24.067064 | orchestrator |
2025-09-19 17:04:24.067100 | orchestrator |
2025-09-19 17:04:24.067109 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 17:04:24.067118 | orchestrator |
2025-09-19 17:04:24.067126 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 17:04:24.067134 | orchestrator | Friday 19 September 2025 17:01:33 +0000 (0:00:00.282) 0:00:00.282 ******
2025-09-19 17:04:24.067142 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:04:24.067151 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:04:24.067159 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:04:24.067167 | orchestrator |
2025-09-19 17:04:24.067176 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 17:04:24.067183 | orchestrator | Friday 19 September 2025 17:01:33 +0000 (0:00:00.286) 0:00:00.568 ******
2025-09-19 17:04:24.067192 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-09-19 17:04:24.067200 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-09-19 17:04:24.067208 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-09-19 17:04:24.067216 | orchestrator |
2025-09-19 17:04:24.067224 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-09-19 17:04:24.067232 | orchestrator |
2025-09-19 17:04:24.067246 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-19 17:04:24.067260 | orchestrator |
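The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" entries above come from a simple poll-and-wait loop over the submitted task IDs. A minimal sketch of that pattern, assuming a caller-supplied `get_task_state` helper (hypothetical here, not the actual OSISM client API):

```python
import time


def wait_for_tasks(task_ids, get_task_state, interval=1.0, timeout=300.0):
    """Poll each task until every one leaves the STARTED/PENDING states.

    get_task_state is a hypothetical callable returning a state string
    such as 'STARTED' or 'SUCCESS' for a given task ID.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    results = {}
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                # Terminal state reached (SUCCESS, FAILURE, ...): record it.
                results[task_id] = state
        pending -= results.keys()
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

Tasks finish independently, which is why the log shows one task flipping to SUCCESS while the others remain STARTED for several more polling rounds.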
Friday 19 September 2025 17:01:34 +0000 (0:00:00.417) 0:00:00.986 ****** 2025-09-19 17:04:24.067273 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:04:24.067286 | orchestrator | 2025-09-19 17:04:24.067299 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-09-19 17:04:24.067313 | orchestrator | Friday 19 September 2025 17:01:34 +0000 (0:00:00.521) 0:00:01.508 ****** 2025-09-19 17:04:24.067343 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 17:04:24.067358 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 17:04:24.067387 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-09-19 17:04:24.067396 | orchestrator | 2025-09-19 17:04:24.067404 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-09-19 17:04:24.067411 | orchestrator | Friday 19 September 2025 17:01:36 +0000 (0:00:01.628) 0:00:03.136 ****** 2025-09-19 17:04:24.067422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:04:24.067433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:04:24.067452 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:04:24.067464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:04:24.067485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:04:24.067495 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:04:24.067503 | orchestrator | 2025-09-19 17:04:24.067512 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-19 17:04:24.067520 | orchestrator | Friday 19 September 2025 17:01:38 +0000 (0:00:01.801) 0:00:04.938 ****** 2025-09-19 17:04:24.067528 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:04:24.067536 | orchestrator | 2025-09-19 17:04:24.067544 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-09-19 17:04:24.067552 | orchestrator | Friday 19 September 2025 17:01:38 +0000 (0:00:00.509) 0:00:05.448 ****** 2025-09-19 17:04:24.067567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:04:24.067576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:04:24.067594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:04:24.067603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:04:24.067616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:04:24.067626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:04:24.067639 | orchestrator | 2025-09-19 17:04:24.067647 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-09-19 17:04:24.067659 | orchestrator | Friday 19 September 2025 17:01:41 +0000 (0:00:03.063) 0:00:08.511 ****** 2025-09-19 
17:04:24.067668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 17:04:24.067677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 17:04:24.067685 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 17:04:24.067694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 17:04:24.067709 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 17:04:24.067732 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 17:04:24.067741 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:24.067750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2025-09-19 17:04:24.067759 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:24.067767 | orchestrator | 2025-09-19 17:04:24.067775 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-09-19 17:04:24.067783 | orchestrator | Friday 19 September 2025 17:01:42 +0000 (0:00:00.922) 0:00:09.434 ****** 2025-09-19 17:04:24.067791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 17:04:24.067807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 17:04:24.067822 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:04:24.067834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 17:04:24.067843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 17:04:24.067876 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:24.067885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-09-19 17:04:24.067901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-09-19 17:04:24.067915 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:24.067923 | orchestrator | 2025-09-19 17:04:24.067931 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-09-19 17:04:24.067939 | orchestrator | Friday 19 September 2025 17:01:44 +0000 (0:00:01.587) 0:00:11.021 ****** 2025-09-19 17:04:24.067951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:04:24.067960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:04:24.067969 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:04:24.067983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:04:24.068001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:04:24.068010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:04:24.068018 | orchestrator | 2025-09-19 17:04:24.068027 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-09-19 17:04:24.068035 | orchestrator | Friday 19 September 2025 17:01:46 +0000 (0:00:02.557) 0:00:13.579 ****** 2025-09-19 17:04:24.068043 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:24.068051 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:04:24.068059 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:04:24.068067 | orchestrator | 2025-09-19 17:04:24.068075 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-19 17:04:24.068083 | orchestrator | Friday 19 September 2025 17:01:49 +0000 (0:00:02.925) 0:00:16.504 ****** 2025-09-19 17:04:24.068091 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:24.068098 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:04:24.068106 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:04:24.068114 | orchestrator | 2025-09-19 17:04:24.068122 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-19 17:04:24.068130 | orchestrator | Friday 19 September 2025 17:01:52 +0000 (0:00:02.611) 0:00:19.115 ****** 2025-09-19 17:04:24.068138 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:04:24.068157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:04:24.068169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-19 17:04:24.068179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:04:24.068188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:04:24.068209 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-19 17:04:24.068218 | orchestrator | 2025-09-19 17:04:24.068226 | orchestrator | TASK [opensearch : 
include_tasks] **********************************************
2025-09-19 17:04:24.068234 | orchestrator | Friday 19 September 2025 17:01:54 +0000 (0:00:02.136) 0:00:21.252 ******
2025-09-19 17:04:24.068244 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:04:24.068258 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:04:24.068271 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:04:24.068284 | orchestrator |
2025-09-19 17:04:24.068296 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-19 17:04:24.068309 | orchestrator | Friday 19 September 2025 17:01:54 +0000 (0:00:00.360) 0:00:21.613 ******
2025-09-19 17:04:24.068322 | orchestrator |
2025-09-19 17:04:24.068334 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-19 17:04:24.068352 | orchestrator | Friday 19 September 2025 17:01:55 +0000 (0:00:00.140) 0:00:21.754 ******
2025-09-19 17:04:24.068366 | orchestrator |
2025-09-19 17:04:24.068380 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-09-19 17:04:24.068393 | orchestrator | Friday 19 September 2025 17:01:55 +0000 (0:00:00.139) 0:00:21.893 ******
2025-09-19 17:04:24.068406 | orchestrator |
2025-09-19 17:04:24.068415 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-09-19 17:04:24.068422 | orchestrator | Friday 19 September 2025 17:01:55 +0000 (0:00:00.083) 0:00:21.977 ******
2025-09-19 17:04:24.068430 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:04:24.068438 | orchestrator |
2025-09-19 17:04:24.068446 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-09-19 17:04:24.068454 | orchestrator | Friday 19 September 2025 17:01:55 +0000 (0:00:00.409) 0:00:22.387 ******
2025-09-19 17:04:24.068461 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:04:24.068469 | orchestrator |
2025-09-19 17:04:24.068477 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-09-19 17:04:24.068485 | orchestrator | Friday 19 September 2025 17:01:56 +0000 (0:00:01.026) 0:00:23.414 ******
2025-09-19 17:04:24.068493 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:04:24.068500 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:04:24.068508 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:04:24.068516 | orchestrator |
2025-09-19 17:04:24.068524 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-09-19 17:04:24.068532 | orchestrator | Friday 19 September 2025 17:02:53 +0000 (0:00:57.230) 0:01:20.645 ******
2025-09-19 17:04:24.068546 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:04:24.068554 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:04:24.068562 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:04:24.068570 | orchestrator |
2025-09-19 17:04:24.068578 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-19 17:04:24.068585 | orchestrator | Friday 19 September 2025 17:04:09 +0000 (0:01:15.608) 0:02:36.253 ******
2025-09-19 17:04:24.068593 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:04:24.068601 | orchestrator |
2025-09-19 17:04:24.068609 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-09-19 17:04:24.068617 | orchestrator | Friday 19 September 2025 17:04:10 +0000 (0:00:00.539) 0:02:36.793 ******
2025-09-19 17:04:24.068625 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:04:24.068632 | orchestrator |
2025-09-19 17:04:24.068640 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-09-19 17:04:24.068648 | orchestrator | Friday 19 September 2025 17:04:13 +0000 (0:00:03.113) 0:02:39.907 ******
2025-09-19 17:04:24.068656 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:04:24.068663 | orchestrator |
2025-09-19 17:04:24.068671 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-09-19 17:04:24.068679 | orchestrator | Friday 19 September 2025 17:04:15 +0000 (0:00:02.522) 0:02:42.429 ******
2025-09-19 17:04:24.068687 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:04:24.068695 | orchestrator |
2025-09-19 17:04:24.068703 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-09-19 17:04:24.068710 | orchestrator | Friday 19 September 2025 17:04:18 +0000 (0:00:03.057) 0:02:45.487 ******
2025-09-19 17:04:24.068718 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:04:24.068726 | orchestrator |
2025-09-19 17:04:24.068734 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 17:04:24.068743 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 17:04:24.068752 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 17:04:24.068760 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-09-19 17:04:24.068768 | orchestrator |
2025-09-19 17:04:24.068776 | orchestrator |
2025-09-19 17:04:24.068783 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 17:04:24.068797 | orchestrator | Friday 19 September 2025 17:04:21 +0000 (0:00:02.714) 0:02:48.201 ******
2025-09-19 17:04:24.068805 | orchestrator | ===============================================================================
2025-09-19 17:04:24.068812 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 75.61s
2025-09-19 17:04:24.068820 | orchestrator | opensearch : Restart opensearch container ------------------------------ 57.23s
2025-09-19 17:04:24.068828 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.11s
2025-09-19 17:04:24.068836 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.06s
2025-09-19 17:04:24.068843 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.06s
2025-09-19 17:04:24.068875 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.93s
2025-09-19 17:04:24.068884 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.71s
2025-09-19 17:04:24.068892 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.61s
2025-09-19 17:04:24.068900 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.56s
2025-09-19 17:04:24.068908 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.52s
2025-09-19 17:04:24.068916 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.14s
2025-09-19 17:04:24.068929 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.80s
2025-09-19 17:04:24.068937 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.63s
2025-09-19 17:04:24.068949 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.59s
2025-09-19 17:04:24.068957 | orchestrator | opensearch : Perform a flush -------------------------------------------- 1.03s
2025-09-19 17:04:24.068965 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.92s
2025-09-19 17:04:24.068973 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s
2025-09-19 17:04:24.068981 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s
2025-09-19 17:04:24.068989 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s
2025-09-19 17:04:24.068996 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s
2025-09-19 17:04:27.114807 | orchestrator | 2025-09-19 17:04:27 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:04:27.116706 | orchestrator | 2025-09-19 17:04:27 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:27.117200 | orchestrator | 2025-09-19 17:04:27 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:30.168766 | orchestrator | 2025-09-19 17:04:30 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:04:30.172123 | orchestrator | 2025-09-19 17:04:30 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:30.172158 | orchestrator | 2025-09-19 17:04:30 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:33.215365 | orchestrator | 2025-09-19 17:04:33 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:04:33.218194 | orchestrator | 2025-09-19 17:04:33 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:33.218239 | orchestrator | 2025-09-19 17:04:33 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:36.262917 | orchestrator | 2025-09-19 17:04:36 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:04:36.264888 | orchestrator | 2025-09-19 17:04:36 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:36.264903 | orchestrator | 2025-09-19 17:04:36 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:39.313646 | orchestrator | 2025-09-19 17:04:39 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is
in state STARTED
2025-09-19 17:04:39.315343 | orchestrator | 2025-09-19 17:04:39 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:39.315764 | orchestrator | 2025-09-19 17:04:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:42.370402 | orchestrator | 2025-09-19 17:04:42 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED
2025-09-19 17:04:42.372618 | orchestrator | 2025-09-19 17:04:42 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state STARTED
2025-09-19 17:04:42.372654 | orchestrator | 2025-09-19 17:04:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:04:45.428553 | orchestrator |
2025-09-19 17:04:45.428638 | orchestrator |
2025-09-19 17:04:45.428652 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-09-19 17:04:45.428664 | orchestrator |
2025-09-19 17:04:45.428676 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-09-19 17:04:45.428687 | orchestrator | Friday 19 September 2025 17:01:33 +0000 (0:00:00.105) 0:00:00.105 ******
2025-09-19 17:04:45.428698 | orchestrator | ok: [localhost] => {
2025-09-19 17:04:45.428711 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-09-19 17:04:45.428748 | orchestrator | }
2025-09-19 17:04:45.428760 | orchestrator |
2025-09-19 17:04:45.428771 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-09-19 17:04:45.428782 | orchestrator | Friday 19 September 2025 17:01:33 +0000 (0:00:00.044) 0:00:00.150 ******
2025-09-19 17:04:45.428793 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2025-09-19 17:04:45.428805 | orchestrator | ...ignoring
2025-09-19 17:04:45.428816 | orchestrator |
2025-09-19 17:04:45.428827 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2025-09-19 17:04:45.428838 | orchestrator | Friday 19 September 2025 17:01:36 +0000 (0:00:02.829) 0:00:02.980 ******
2025-09-19 17:04:45.428849 | orchestrator | skipping: [localhost]
2025-09-19 17:04:45.428903 | orchestrator |
2025-09-19 17:04:45.428914 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2025-09-19 17:04:45.428926 | orchestrator | Friday 19 September 2025 17:01:36 +0000 (0:00:00.060) 0:00:03.040 ******
2025-09-19 17:04:45.428936 | orchestrator | ok: [localhost]
2025-09-19 17:04:45.428947 | orchestrator |
2025-09-19 17:04:45.428958 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 17:04:45.428969 | orchestrator |
2025-09-19 17:04:45.428980 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 17:04:45.428991 | orchestrator | Friday 19 September 2025 17:01:36 +0000 (0:00:00.185) 0:00:03.226 ******
2025-09-19 17:04:45.429002 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:04:45.429013 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:04:45.429023 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:04:45.429034 | orchestrator |
2025-09-19 17:04:45.429059 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 17:04:45.429070 | orchestrator | Friday 19 September 2025 17:01:37 +0000 (0:00:00.512) 0:00:03.738 ******
2025-09-19 17:04:45.429081 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-09-19 17:04:45.429092 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
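The "Check MariaDB service" failure above is the expected probe: Ansible's wait_for module connects to 192.168.16.9:3306 and looks for the string "MariaDB" in the server greeting, and the playbook ignores the timeout to decide between deploy and upgrade. A minimal sketch of that banner probe in plain Python (the function name `port_banner_contains` is illustrative, not the module's actual implementation):

```python
import socket

def port_banner_contains(host, port, needle, timeout=2.0):
    """Connect to host:port and report whether the initial banner
    contains `needle` (e.g. b"MariaDB"). A connection failure or a
    non-matching banner both return False, mirroring how the log's
    wait_for check treats an undeployed service."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            banner = sock.recv(1024)  # MySQL/MariaDB sends a greeting first
            return needle in banner
    except OSError:
        return False
```

In the play above a False result (here surfaced as the ignored fatal) leaves `kolla_action_mariadb` at the deploy action rather than upgrade.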
2025-09-19 17:04:45.429103 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-19 17:04:45.429113 | orchestrator | 2025-09-19 17:04:45.429124 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-19 17:04:45.429135 | orchestrator | 2025-09-19 17:04:45.429145 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-19 17:04:45.429156 | orchestrator | Friday 19 September 2025 17:01:37 +0000 (0:00:00.513) 0:00:04.251 ****** 2025-09-19 17:04:45.429167 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 17:04:45.429178 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-19 17:04:45.429188 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-19 17:04:45.429199 | orchestrator | 2025-09-19 17:04:45.429210 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 17:04:45.429221 | orchestrator | Friday 19 September 2025 17:01:38 +0000 (0:00:00.406) 0:00:04.658 ****** 2025-09-19 17:04:45.429231 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:04:45.429243 | orchestrator | 2025-09-19 17:04:45.429254 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-09-19 17:04:45.429265 | orchestrator | Friday 19 September 2025 17:01:38 +0000 (0:00:00.515) 0:00:05.173 ****** 2025-09-19 17:04:45.429300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 17:04:45.429343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 17:04:45.429357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 17:04:45.429376 | orchestrator | 2025-09-19 17:04:45.429394 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-19 17:04:45.429406 | orchestrator | Friday 19 September 2025 17:01:42 +0000 (0:00:03.537) 0:00:08.711 ****** 2025-09-19 17:04:45.429417 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.429429 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:45.429440 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.429451 | orchestrator | 2025-09-19 17:04:45.429461 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-19 17:04:45.429472 | orchestrator | Friday 19 September 2025 17:01:43 +0000 (0:00:00.769) 0:00:09.481 ****** 2025-09-19 17:04:45.429483 | orchestrator | skipping: [testbed-node-1] 2025-09-19 
17:04:45.429494 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.429505 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:45.429515 | orchestrator | 2025-09-19 17:04:45.429526 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-19 17:04:45.429537 | orchestrator | Friday 19 September 2025 17:01:44 +0000 (0:00:01.784) 0:00:11.266 ****** 2025-09-19 17:04:45.429554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 17:04:45.429574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 17:04:45.429598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 
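The `custom_member_list` entries in the items above are prebuilt haproxy `server` lines: the shard's active node gets a plain entry and every other node is marked `backup`, so haproxy fails writes over to a standby instead of balancing them across all Galera nodes. A minimal sketch of how such a list could be rendered (the helper name and node tuples are illustrative, not kolla-ansible's actual template):

```python
def mariadb_member_lines(nodes, port=3306, primary=None):
    """Render haproxy 'server' lines in the style of the custom_member_list
    above; the first node is treated as primary unless one is given."""
    primary = primary or nodes[0][0]
    lines = []
    for name, addr in nodes:
        line = (f" server {name} {addr}:{port} check port {port} "
                f"inter 2000 rise 2 fall 5")
        if name != primary:
            line += " backup"  # standby members only take traffic on failover
        lines.append(line)
    return lines

MEMBERS = mariadb_member_lines([
    ("testbed-node-0", "192.168.16.10"),
    ("testbed-node-1", "192.168.16.11"),
    ("testbed-node-2", "192.168.16.12"),
])
```

The `check port … inter 2000 rise 2 fall 5` options make haproxy probe each member every 2 s, needing 2 successes to mark it up and 5 failures to mark it down.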
17:04:45.429610 | orchestrator | 2025-09-19 17:04:45.429621 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-19 17:04:45.429632 | orchestrator | Friday 19 September 2025 17:01:48 +0000 (0:00:03.455) 0:00:14.722 ****** 2025-09-19 17:04:45.429643 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.429654 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.429664 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:45.429675 | orchestrator | 2025-09-19 17:04:45.429686 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-19 17:04:45.429704 | orchestrator | Friday 19 September 2025 17:01:49 +0000 (0:00:01.210) 0:00:15.932 ****** 2025-09-19 17:04:45.429715 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:45.429726 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:04:45.429736 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:04:45.429747 | orchestrator | 2025-09-19 17:04:45.429758 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 17:04:45.429769 | orchestrator | Friday 19 September 2025 17:01:54 +0000 (0:00:04.806) 0:00:20.739 ****** 2025-09-19 17:04:45.429780 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:04:45.429790 | orchestrator | 2025-09-19 17:04:45.429801 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-19 17:04:45.429812 | orchestrator | Friday 19 September 2025 17:01:54 +0000 (0:00:00.521) 0:00:21.261 ****** 2025-09-19 17:04:45.429832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:04:45.429845 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.429893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:04:45.429913 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:04:45.429932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', 
'/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:04:45.429945 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.429955 | orchestrator | 2025-09-19 17:04:45.429966 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-19 17:04:45.429977 | orchestrator | Friday 19 September 2025 17:01:58 +0000 (0:00:03.765) 0:00:25.026 ****** 2025-09-19 17:04:45.429993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:04:45.430011 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:04:45.430072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:04:45.430085 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.430102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:04:45.430126 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.430138 | orchestrator | 2025-09-19 17:04:45.430149 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-19 17:04:45.430160 | orchestrator | Friday 19 September 2025 17:02:01 +0000 (0:00:02.632) 0:00:27.659 ****** 2025-09-19 17:04:45.430171 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:04:45.430183 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.430209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:04:45.430227 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:04:45.430239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-19 17:04:45.430251 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.430262 | orchestrator | 2025-09-19 17:04:45.430272 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-19 17:04:45.430283 | orchestrator | Friday 19 September 2025 17:02:04 +0000 
(0:00:02.786) 0:00:30.446 ****** 2025-09-19 17:04:45.430300 | orchestrator | 2025-09-19 17:04:45 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED 2025-09-19 17:04:45.430313 | orchestrator | 2025-09-19 17:04:45 | INFO  | Task a3d8f9c0-2c97-4fb2-ba2c-00bcb9887708 is in state SUCCESS 2025-09-19 17:04:45.430324 | orchestrator | 2025-09-19 17:04:45 | INFO  | Task 480fa2ca-0f28-4e73-952a-e69a9d638c75 is in state STARTED 2025-09-19 17:04:45.430341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 

'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 17:04:45.430360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-19 17:04:45.430381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 
fall 5 backup', '']}}}}) 2025-09-19 17:04:45.430401 | orchestrator | 2025-09-19 17:04:45.430412 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-19 17:04:45.430427 | orchestrator | Friday 19 September 2025 17:02:07 +0000 (0:00:03.092) 0:00:33.539 ****** 2025-09-19 17:04:45.430438 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:45.430449 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:04:45.430460 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:04:45.430470 | orchestrator | 2025-09-19 17:04:45.430481 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-19 17:04:45.430492 | orchestrator | Friday 19 September 2025 17:02:07 +0000 (0:00:00.850) 0:00:34.389 ****** 2025-09-19 17:04:45.430502 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:04:45.430513 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:04:45.430524 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:04:45.430535 | orchestrator | 2025-09-19 17:04:45.430546 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-19 17:04:45.430557 | orchestrator | Friday 19 September 2025 17:02:08 +0000 (0:00:00.665) 0:00:35.055 ****** 2025-09-19 17:04:45.430567 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:04:45.430578 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:04:45.430589 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:04:45.430599 | orchestrator | 2025-09-19 17:04:45.430610 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-19 17:04:45.430621 | orchestrator | Friday 19 September 2025 17:02:08 +0000 (0:00:00.318) 0:00:35.374 ****** 2025-09-19 17:04:45.430633 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-19 17:04:45.430644 | orchestrator | ...ignoring 2025-09-19 17:04:45.430656 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-19 17:04:45.430666 | orchestrator | ...ignoring 2025-09-19 17:04:45.430677 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-19 17:04:45.430688 | orchestrator | ...ignoring 2025-09-19 17:04:45.430699 | orchestrator | 2025-09-19 17:04:45.430710 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-19 17:04:45.430720 | orchestrator | Friday 19 September 2025 17:02:19 +0000 (0:00:10.913) 0:00:46.288 ****** 2025-09-19 17:04:45.430731 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:04:45.430742 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:04:45.430752 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:04:45.430763 | orchestrator | 2025-09-19 17:04:45.430774 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-19 17:04:45.430784 | orchestrator | Friday 19 September 2025 17:02:20 +0000 (0:00:00.442) 0:00:46.730 ****** 2025-09-19 17:04:45.430795 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:04:45.430806 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.430816 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.430827 | orchestrator | 2025-09-19 17:04:45.430838 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-19 17:04:45.430849 | orchestrator | Friday 19 September 2025 17:02:20 +0000 (0:00:00.629) 0:00:47.359 ****** 2025-09-19 17:04:45.430914 | orchestrator | skipping: 
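The "Check MariaDB service port liveness" failures above are expected on a fresh deploy: no server is listening yet, so Ansible's `wait_for` with a `MariaDB` search string times out after 10 s and the role ignores the error to decide the cluster must be bootstrapped. Roughly, that probe keeps connecting and looking for the needle in the server greeting, as in this sketch (not Ansible's code; the helper name and banner text are illustrative):

```python
import socket
import threading
import time

def wait_for_search_string(host, port, needle, timeout=10.0, interval=0.5):
    """Keep connecting to host:port until the first bytes read contain
    `needle`, or the timeout expires. Returns True/False like the task's
    ok/failed outcome."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval) as conn:
                conn.settimeout(interval)
                if needle.encode() in conn.recv(1024):
                    return True
        except OSError:
            pass  # refused / timed out: port not live yet, retry
        time.sleep(interval)
    return False

# Demo against a throwaway local listener that greets like a MariaDB server.
_srv = socket.socket()
_srv.bind(("127.0.0.1", 0))
_srv.listen(1)
_port = _srv.getsockname()[1]

def _greet():
    conn, _ = _srv.accept()
    conn.sendall(b"5.5.5-10.6-MariaDB-log")  # banner text is illustrative
    conn.close()

threading.Thread(target=_greet, daemon=True).start()
ALIVE = wait_for_search_string("127.0.0.1", _port, "MariaDB", timeout=5)
```

On the real hosts the same probe against 192.168.16.10-12:3306 finds nothing listening, which is exactly the `Timeout when waiting for search string MariaDB` message logged for each node.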
[testbed-node-0] 2025-09-19 17:04:45.430926 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.430937 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.430947 | orchestrator | 2025-09-19 17:04:45.430958 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-19 17:04:45.430976 | orchestrator | Friday 19 September 2025 17:02:21 +0000 (0:00:00.441) 0:00:47.801 ****** 2025-09-19 17:04:45.430987 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:04:45.430998 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.431008 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.431019 | orchestrator | 2025-09-19 17:04:45.431030 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-19 17:04:45.431046 | orchestrator | Friday 19 September 2025 17:02:21 +0000 (0:00:00.432) 0:00:48.234 ****** 2025-09-19 17:04:45.431058 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:04:45.431069 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:04:45.431079 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:04:45.431090 | orchestrator | 2025-09-19 17:04:45.431101 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-19 17:04:45.431112 | orchestrator | Friday 19 September 2025 17:02:22 +0000 (0:00:00.406) 0:00:48.641 ****** 2025-09-19 17:04:45.431123 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:04:45.431134 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.431145 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.431156 | orchestrator | 2025-09-19 17:04:45.431166 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 17:04:45.431177 | orchestrator | Friday 19 September 2025 17:02:23 +0000 (0:00:00.852) 0:00:49.494 ****** 2025-09-19 17:04:45.431188 | orchestrator | skipping: 
[testbed-node-1] 2025-09-19 17:04:45.431199 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.431210 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-19 17:04:45.431221 | orchestrator | 2025-09-19 17:04:45.431232 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-19 17:04:45.431243 | orchestrator | Friday 19 September 2025 17:02:23 +0000 (0:00:00.390) 0:00:49.884 ****** 2025-09-19 17:04:45.431254 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:45.431265 | orchestrator | 2025-09-19 17:04:45.431276 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-19 17:04:45.431287 | orchestrator | Friday 19 September 2025 17:02:33 +0000 (0:00:09.830) 0:00:59.714 ****** 2025-09-19 17:04:45.431297 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:04:45.431308 | orchestrator | 2025-09-19 17:04:45.431319 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 17:04:45.431330 | orchestrator | Friday 19 September 2025 17:02:33 +0000 (0:00:00.118) 0:00:59.833 ****** 2025-09-19 17:04:45.431341 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:04:45.431352 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.431362 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.431373 | orchestrator | 2025-09-19 17:04:45.431384 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-19 17:04:45.431400 | orchestrator | Friday 19 September 2025 17:02:34 +0000 (0:00:00.983) 0:01:00.816 ****** 2025-09-19 17:04:45.431411 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:45.431422 | orchestrator | 2025-09-19 17:04:45.431432 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-19 17:04:45.431441 | orchestrator | Friday 19 
September 2025 17:02:42 +0000 (0:00:07.814) 0:01:08.631 ****** 2025-09-19 17:04:45.431451 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:04:45.431461 | orchestrator | 2025-09-19 17:04:45.431471 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-19 17:04:45.431480 | orchestrator | Friday 19 September 2025 17:02:43 +0000 (0:00:01.580) 0:01:10.211 ****** 2025-09-19 17:04:45.431490 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:04:45.431500 | orchestrator | 2025-09-19 17:04:45.431509 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-19 17:04:45.431519 | orchestrator | Friday 19 September 2025 17:02:46 +0000 (0:00:02.551) 0:01:12.763 ****** 2025-09-19 17:04:45.431529 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:45.431544 | orchestrator | 2025-09-19 17:04:45.431554 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-19 17:04:45.431563 | orchestrator | Friday 19 September 2025 17:02:46 +0000 (0:00:00.130) 0:01:12.893 ****** 2025-09-19 17:04:45.431573 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:04:45.431583 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.431592 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.431602 | orchestrator | 2025-09-19 17:04:45.431611 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-19 17:04:45.431621 | orchestrator | Friday 19 September 2025 17:02:46 +0000 (0:00:00.329) 0:01:13.223 ****** 2025-09-19 17:04:45.431630 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:04:45.431640 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-19 17:04:45.431650 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:04:45.431659 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:04:45.431669 | orchestrator | 
2025-09-19 17:04:45.431678 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-19 17:04:45.431688 | orchestrator | skipping: no hosts matched 2025-09-19 17:04:45.431697 | orchestrator | 2025-09-19 17:04:45.431707 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 17:04:45.431717 | orchestrator | 2025-09-19 17:04:45.431726 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 17:04:45.431736 | orchestrator | Friday 19 September 2025 17:02:47 +0000 (0:00:00.526) 0:01:13.750 ****** 2025-09-19 17:04:45.431745 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:04:45.431755 | orchestrator | 2025-09-19 17:04:45.431764 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 17:04:45.431774 | orchestrator | Friday 19 September 2025 17:03:05 +0000 (0:00:18.518) 0:01:32.268 ****** 2025-09-19 17:04:45.431783 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:04:45.431793 | orchestrator | 2025-09-19 17:04:45.431803 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 17:04:45.431812 | orchestrator | Friday 19 September 2025 17:03:26 +0000 (0:00:20.601) 0:01:52.869 ****** 2025-09-19 17:04:45.431822 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:04:45.431831 | orchestrator | 2025-09-19 17:04:45.431841 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 17:04:45.431850 | orchestrator | 2025-09-19 17:04:45.431875 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 17:04:45.431885 | orchestrator | Friday 19 September 2025 17:03:28 +0000 (0:00:02.329) 0:01:55.199 ****** 2025-09-19 17:04:45.431894 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:04:45.431904 | orchestrator | 
2025-09-19 17:04:45.431914 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 17:04:45.431929 | orchestrator | Friday 19 September 2025 17:03:47 +0000 (0:00:18.981) 0:02:14.181 ****** 2025-09-19 17:04:45.431939 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:04:45.431949 | orchestrator | 2025-09-19 17:04:45.431958 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 17:04:45.431968 | orchestrator | Friday 19 September 2025 17:04:08 +0000 (0:00:20.625) 0:02:34.806 ****** 2025-09-19 17:04:45.431978 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:04:45.431987 | orchestrator | 2025-09-19 17:04:45.431997 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-19 17:04:45.432007 | orchestrator | 2025-09-19 17:04:45.432017 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-19 17:04:45.432026 | orchestrator | Friday 19 September 2025 17:04:10 +0000 (0:00:02.586) 0:02:37.393 ****** 2025-09-19 17:04:45.432036 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:45.432046 | orchestrator | 2025-09-19 17:04:45.432055 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-19 17:04:45.432065 | orchestrator | Friday 19 September 2025 17:04:23 +0000 (0:00:12.099) 0:02:49.492 ****** 2025-09-19 17:04:45.432081 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:04:45.432090 | orchestrator | 2025-09-19 17:04:45.432100 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-19 17:04:45.432109 | orchestrator | Friday 19 September 2025 17:04:27 +0000 (0:00:04.576) 0:02:54.069 ****** 2025-09-19 17:04:45.432119 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:04:45.432129 | orchestrator | 2025-09-19 17:04:45.432138 | orchestrator | PLAY [Apply mariadb 
post-configuration] **************************************** 2025-09-19 17:04:45.432148 | orchestrator | 2025-09-19 17:04:45.432157 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-19 17:04:45.432167 | orchestrator | Friday 19 September 2025 17:04:30 +0000 (0:00:02.607) 0:02:56.677 ****** 2025-09-19 17:04:45.432176 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:04:45.432186 | orchestrator | 2025-09-19 17:04:45.432196 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-19 17:04:45.432205 | orchestrator | Friday 19 September 2025 17:04:30 +0000 (0:00:00.524) 0:02:57.202 ****** 2025-09-19 17:04:45.432215 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.432225 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.432234 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:45.432244 | orchestrator | 2025-09-19 17:04:45.432258 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-19 17:04:45.432268 | orchestrator | Friday 19 September 2025 17:04:33 +0000 (0:00:02.310) 0:02:59.512 ****** 2025-09-19 17:04:45.432278 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.432287 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.432297 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:45.432306 | orchestrator | 2025-09-19 17:04:45.432316 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-19 17:04:45.432325 | orchestrator | Friday 19 September 2025 17:04:35 +0000 (0:00:02.370) 0:03:01.883 ****** 2025-09-19 17:04:45.432335 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.432345 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.432354 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:45.432364 | orchestrator | 
2025-09-19 17:04:45.432374 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-19 17:04:45.432383 | orchestrator | Friday 19 September 2025 17:04:37 +0000 (0:00:02.267) 0:03:04.151 ****** 2025-09-19 17:04:45.432393 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.432402 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.432412 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:04:45.432421 | orchestrator | 2025-09-19 17:04:45.432431 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-19 17:04:45.432441 | orchestrator | Friday 19 September 2025 17:04:39 +0000 (0:00:02.205) 0:03:06.356 ****** 2025-09-19 17:04:45.432450 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:04:45.432460 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:04:45.432470 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:04:45.432479 | orchestrator | 2025-09-19 17:04:45.432489 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-19 17:04:45.432498 | orchestrator | Friday 19 September 2025 17:04:42 +0000 (0:00:02.980) 0:03:09.336 ****** 2025-09-19 17:04:45.432508 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:04:45.432517 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:04:45.432527 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:04:45.432537 | orchestrator | 2025-09-19 17:04:45.432546 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:04:45.432556 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-19 17:04:45.432566 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-19 17:04:45.432583 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1 
 2025-09-19 17:04:45.432593 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-19 17:04:45.432602 | orchestrator | 2025-09-19 17:04:45.432612 | orchestrator | 2025-09-19 17:04:45.432622 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:04:45.432632 | orchestrator | Friday 19 September 2025 17:04:43 +0000 (0:00:00.326) 0:03:09.663 ****** 2025-09-19 17:04:45.432641 | orchestrator | =============================================================================== 2025-09-19 17:04:45.432651 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.23s 2025-09-19 17:04:45.432661 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.50s 2025-09-19 17:04:45.432676 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.10s 2025-09-19 17:04:45.432685 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.91s 2025-09-19 17:04:45.432695 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.83s 2025-09-19 17:04:45.432704 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.81s 2025-09-19 17:04:45.432714 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.92s 2025-09-19 17:04:45.432723 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.81s 2025-09-19 17:04:45.432733 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.58s 2025-09-19 17:04:45.432742 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.77s 2025-09-19 17:04:45.432752 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.54s 2025-09-19 17:04:45.432762 | orchestrator | 
mariadb : Copying over config.json files for services ------------------- 3.46s 2025-09-19 17:04:45.432771 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.09s 2025-09-19 17:04:45.432781 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.98s 2025-09-19 17:04:45.432790 | orchestrator | Check MariaDB service --------------------------------------------------- 2.83s 2025-09-19 17:04:45.432800 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.79s 2025-09-19 17:04:45.432810 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.63s 2025-09-19 17:04:45.432819 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.61s 2025-09-19 17:04:45.432829 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.55s 2025-09-19 17:04:45.432839 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.37s 2025-09-19 17:04:45.432848 | orchestrator | 2025-09-19 17:04:45 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:04:45.432902 | orchestrator | 2025-09-19 17:04:45 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:04:48.467113 | orchestrator | 2025-09-19 17:04:48 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED 2025-09-19 17:04:48.467197 | orchestrator | 2025-09-19 17:04:48 | INFO  | Task 480fa2ca-0f28-4e73-952a-e69a9d638c75 is in state STARTED 2025-09-19 17:04:48.467581 | orchestrator | 2025-09-19 17:04:48 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:04:48.467608 | orchestrator | 2025-09-19 17:04:48 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:04:51.500675 | orchestrator | 2025-09-19 17:04:51 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED 2025-09-19 
17:06:01.532782 | orchestrator | 2025-09-19 17:06:01 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED 2025-09-19 17:06:01.534225 | orchestrator
| 2025-09-19 17:06:01 | INFO  | Task 480fa2ca-0f28-4e73-952a-e69a9d638c75 is in state STARTED 2025-09-19 17:06:01.536358 | orchestrator | 2025-09-19 17:06:01 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:01.536382 | orchestrator | 2025-09-19 17:06:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:04.571233 | orchestrator | 2025-09-19 17:06:04 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state STARTED 2025-09-19 17:06:04.573140 | orchestrator | 2025-09-19 17:06:04 | INFO  | Task 480fa2ca-0f28-4e73-952a-e69a9d638c75 is in state STARTED 2025-09-19 17:06:04.574100 | orchestrator | 2025-09-19 17:06:04 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:04.574211 | orchestrator | 2025-09-19 17:06:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:07.630623 | orchestrator | 2025-09-19 17:06:07.630730 | orchestrator | 2025-09-19 17:06:07.630746 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-19 17:06:07.630760 | orchestrator | 2025-09-19 17:06:07.630771 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-19 17:06:07.630783 | orchestrator | Friday 19 September 2025 17:03:53 +0000 (0:00:00.511) 0:00:00.511 ****** 2025-09-19 17:06:07.630795 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 17:06:07.630807 | orchestrator | 2025-09-19 17:06:07.630835 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-19 17:06:07.631170 | orchestrator | Friday 19 September 2025 17:03:53 +0000 (0:00:00.472) 0:00:00.984 ****** 2025-09-19 17:06:07.631190 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:06:07.631205 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:06:07.631218 | orchestrator | ok: [testbed-node-3] 2025-09-19 
17:06:07.631231 | orchestrator |
2025-09-19 17:06:07.631246 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-09-19 17:06:07.631267 | orchestrator | Friday 19 September 2025 17:03:54 +0000 (0:00:00.669) 0:00:01.654 ******
2025-09-19 17:06:07.631287 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:06:07.631305 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:06:07.631323 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:06:07.631343 | orchestrator |
2025-09-19 17:06:07.631363 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-19 17:06:07.631382 | orchestrator | Friday 19 September 2025 17:03:54 +0000 (0:00:00.274) 0:00:01.928 ******
2025-09-19 17:06:07.631395 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:06:07.631405 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:06:07.631416 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:06:07.631427 | orchestrator |
2025-09-19 17:06:07.631438 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-19 17:06:07.631449 | orchestrator | Friday 19 September 2025 17:03:55 +0000 (0:00:00.718) 0:00:02.647 ******
2025-09-19 17:06:07.631460 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:06:07.631592 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:06:07.631604 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:06:07.631615 | orchestrator |
2025-09-19 17:06:07.631626 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-19 17:06:07.631661 | orchestrator | Friday 19 September 2025 17:03:55 +0000 (0:00:00.280) 0:00:02.928 ******
2025-09-19 17:06:07.631672 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:06:07.631683 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:06:07.631694 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:06:07.631705 | orchestrator |
2025-09-19 17:06:07.631716 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-19 17:06:07.631727 | orchestrator | Friday 19 September 2025 17:03:56 +0000 (0:00:00.278) 0:00:03.207 ******
2025-09-19 17:06:07.631738 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:06:07.631748 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:06:07.631759 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:06:07.631770 | orchestrator |
2025-09-19 17:06:07.631781 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-19 17:06:07.631792 | orchestrator | Friday 19 September 2025 17:03:56 +0000 (0:00:00.271) 0:00:03.478 ******
2025-09-19 17:06:07.631803 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.631814 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.631825 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.631836 | orchestrator |
2025-09-19 17:06:07.631847 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-19 17:06:07.631857 | orchestrator | Friday 19 September 2025 17:03:56 +0000 (0:00:00.385) 0:00:03.863 ******
2025-09-19 17:06:07.631894 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:06:07.631907 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:06:07.631918 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:06:07.631929 | orchestrator |
2025-09-19 17:06:07.631976 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-19 17:06:07.631989 | orchestrator | Friday 19 September 2025 17:03:56 +0000 (0:00:00.260) 0:00:04.124 ******
2025-09-19 17:06:07.632000 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 17:06:07.632011 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 17:06:07.632022 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 17:06:07.632033 | orchestrator |
2025-09-19 17:06:07.632044 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-19 17:06:07.632055 | orchestrator | Friday 19 September 2025 17:03:57 +0000 (0:00:00.368) 0:00:04.735 ******
2025-09-19 17:06:07.632066 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:06:07.632077 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:06:07.632087 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:06:07.632099 | orchestrator |
2025-09-19 17:06:07.632109 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-19 17:06:07.632120 | orchestrator | Friday 19 September 2025 17:03:57 +0000 (0:00:00.368) 0:00:05.103 ******
2025-09-19 17:06:07.632131 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 17:06:07.632142 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 17:06:07.632153 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 17:06:07.632164 | orchestrator |
2025-09-19 17:06:07.632175 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-19 17:06:07.632186 | orchestrator | Friday 19 September 2025 17:04:00 +0000 (0:00:02.084) 0:00:07.187 ******
2025-09-19 17:06:07.632196 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 17:06:07.632208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 17:06:07.632219 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 17:06:07.632229 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.632240 | orchestrator |
2025-09-19 17:06:07.632259 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use]
********************* 2025-09-19 17:06:07.632315 | orchestrator | Friday 19 September 2025 17:04:00 +0000 (0:00:00.407) 0:00:07.595 ****** 2025-09-19 17:06:07.632339 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.632374 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.632388 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.632399 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:06:07.632410 | orchestrator | 2025-09-19 17:06:07.632422 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-19 17:06:07.632432 | orchestrator | Friday 19 September 2025 17:04:01 +0000 (0:00:00.773) 0:00:08.369 ****** 2025-09-19 17:06:07.632446 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.632460 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.632471 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.632482 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:06:07.632493 | orchestrator | 2025-09-19 17:06:07.632504 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-19 17:06:07.632515 | orchestrator | Friday 19 September 2025 17:04:01 +0000 (0:00:00.159) 0:00:08.528 ****** 2025-09-19 17:06:07.632528 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8d5220122f3a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-19 17:03:58.578248', 'end': '2025-09-19 17:03:58.627720', 'delta': '0:00:00.049472', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8d5220122f3a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-19 17:06:07.632542 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'a3ca191c43a0', 'stderr': '', 'rc': 0, 'cmd': 
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-19 17:03:59.317239', 'end': '2025-09-19 17:03:59.357699', 'delta': '0:00:00.040460', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['a3ca191c43a0'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-19 17:06:07.632576 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cb6e046541cb', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-19 17:03:59.849593', 'end': '2025-09-19 17:03:59.892591', 'delta': '0:00:00.042998', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cb6e046541cb'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-19 17:06:07.632588 | orchestrator | 2025-09-19 17:06:07.632599 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-19 17:06:07.632610 | orchestrator | Friday 19 September 2025 17:04:01 +0000 (0:00:00.368) 0:00:08.896 ****** 2025-09-19 17:06:07.632621 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:06:07.632632 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:06:07.632642 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:06:07.632653 | orchestrator | 2025-09-19 17:06:07.632664 | orchestrator 
| TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-19 17:06:07.632675 | orchestrator | Friday 19 September 2025 17:04:02 +0000 (0:00:00.468) 0:00:09.365 ******
2025-09-19 17:06:07.632685 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-09-19 17:06:07.632696 | orchestrator |
2025-09-19 17:06:07.632707 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-19 17:06:07.632718 | orchestrator | Friday 19 September 2025 17:04:04 +0000 (0:00:02.148) 0:00:11.514 ******
2025-09-19 17:06:07.632728 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.632739 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.632750 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.632761 | orchestrator |
2025-09-19 17:06:07.632772 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-19 17:06:07.632782 | orchestrator | Friday 19 September 2025 17:04:04 +0000 (0:00:00.404) 0:00:11.800 ******
2025-09-19 17:06:07.632793 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.632804 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.632814 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.632825 | orchestrator |
2025-09-19 17:06:07.632836 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 17:06:07.632847 | orchestrator | Friday 19 September 2025 17:04:05 +0000 (0:00:00.486) 0:00:12.204 ******
2025-09-19 17:06:07.632857 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.632897 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.632910 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.632930 | orchestrator |
2025-09-19 17:06:07.632942 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-19 17:06:07.632953 | orchestrator | Friday 19 September 2025 17:04:05 +0000 (0:00:00.127) 0:00:12.691 ******
2025-09-19 17:06:07.632964 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:06:07.632974 | orchestrator |
2025-09-19 17:06:07.632986 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-19 17:06:07.632997 | orchestrator | Friday 19 September 2025 17:04:05 +0000 (0:00:00.127) 0:00:12.818 ******
2025-09-19 17:06:07.633007 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.633018 | orchestrator |
2025-09-19 17:06:07.633029 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-19 17:06:07.633040 | orchestrator | Friday 19 September 2025 17:04:05 +0000 (0:00:00.228) 0:00:13.047 ******
2025-09-19 17:06:07.633058 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.633069 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.633080 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.633090 | orchestrator |
2025-09-19 17:06:07.633101 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-19 17:06:07.633112 | orchestrator | Friday 19 September 2025 17:04:06 +0000 (0:00:00.279) 0:00:13.326 ******
2025-09-19 17:06:07.633123 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.633134 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.633145 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.633155 | orchestrator |
2025-09-19 17:06:07.633167 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-19 17:06:07.633178 | orchestrator | Friday 19 September 2025 17:04:06 +0000 (0:00:00.308) 0:00:13.634 ******
2025-09-19 17:06:07.633189 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.633200 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.633211 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.633221 | orchestrator |
2025-09-19 17:06:07.633232 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-19 17:06:07.633247 | orchestrator | Friday 19 September 2025 17:04:06 +0000 (0:00:00.515) 0:00:14.150 ******
2025-09-19 17:06:07.633267 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.633286 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.633305 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.633326 | orchestrator |
2025-09-19 17:06:07.633386 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-19 17:06:07.633401 | orchestrator | Friday 19 September 2025 17:04:07 +0000 (0:00:00.325) 0:00:14.476 ******
2025-09-19 17:06:07.633412 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.633423 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.633434 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.633445 | orchestrator |
2025-09-19 17:06:07.633456 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-19 17:06:07.633467 | orchestrator | Friday 19 September 2025 17:04:07 +0000 (0:00:00.302) 0:00:14.778 ******
2025-09-19 17:06:07.633477 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.633488 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.633499 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.633510 | orchestrator |
2025-09-19 17:06:07.633521 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-19 17:06:07.633540 | orchestrator | Friday 19 September 2025 17:04:07 +0000 (0:00:00.323) 0:00:15.101 ******
2025-09-19 17:06:07.633551 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.633562 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.633573 | orchestrator | skipping:
[testbed-node-5] 2025-09-19 17:06:07.633583 | orchestrator | 2025-09-19 17:06:07.633619 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-19 17:06:07.633632 | orchestrator | Friday 19 September 2025 17:04:08 +0000 (0:00:00.525) 0:00:15.627 ****** 2025-09-19 17:06:07.633650 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--502e1679--2b8a--59ad--b2cc--f53252d80a70-osd--block--502e1679--2b8a--59ad--b2cc--f53252d80a70', 'dm-uuid-LVM-Vg40vHetn4R56D6Ffi9uOciNR5oL0Yiiyh9RxQKWB6m8dBBSQ0ooWdkOFaYkE1WC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--189b9442--6cba--5a76--9378--3098f039bcec-osd--block--189b9442--6cba--5a76--9378--3098f039bcec', 'dm-uuid-LVM-noY6foXpitZX6cHQDHPdWcoWEE9GLEeGpeaCHFyUXan2usFgI5rj3Wakp48dwX55'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633697 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633731 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633777 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part1', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part14', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part15', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part16', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.633813 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--502e1679--2b8a--59ad--b2cc--f53252d80a70-osd--block--502e1679--2b8a--59ad--b2cc--f53252d80a70'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c9QEt0-HRgd-bY03-Jd9F-51yF-8rcZ-PPDFR4', 'scsi-0QEMU_QEMU_HARDDISK_49605ec5-af84-4e56-b6e7-0932efbf1bcd', 'scsi-SQEMU_QEMU_HARDDISK_49605ec5-af84-4e56-b6e7-0932efbf1bcd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.633839 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--189b9442--6cba--5a76--9378--3098f039bcec-osd--block--189b9442--6cba--5a76--9378--3098f039bcec'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g0pOpv-Df9H-7sCV-gXAD-ztyf-iKEa-62mI36', 'scsi-0QEMU_QEMU_HARDDISK_9516e090-09d3-47b2-a672-12f5ce683363', 'scsi-SQEMU_QEMU_HARDDISK_9516e090-09d3-47b2-a672-12f5ce683363'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.633851 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2-osd--block--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2', 'dm-uuid-LVM-QcDdh1J2jxOs6tp7Oe4XT0Zz1JSjF5dI8NXygTe7o6IzXrm3Ci2oWGBt6XdcCnD9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633896 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfd7083e-59a5-451a-9789-189314eae1f5', 'scsi-SQEMU_QEMU_HARDDISK_bfd7083e-59a5-451a-9789-189314eae1f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.633910 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7-osd--block--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7', 'dm-uuid-LVM-LDASMkfHr0khVCowCzXLcMatR1wSlg7UDRK7AXLK7sqvKBaj0TbHwVG9FYXRIj2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.633933 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633944 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.633990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-09-19 17:06:07.634013 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:06:07.634124 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.634137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.634148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.634170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part1', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part14', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part15', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part16', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.634190 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2-osd--block--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GbVk8X-bjpf-wsn1-v0bH-HW56-9ucN-vMi0Ec', 'scsi-0QEMU_QEMU_HARDDISK_8c3574da-2fac-4f58-bc83-f51ba9425a73', 'scsi-SQEMU_QEMU_HARDDISK_8c3574da-2fac-4f58-bc83-f51ba9425a73'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.634210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7-osd--block--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FyKxvb-dSUu-GGIj-2HDa-wyiz-cOn4-Bzoouk', 'scsi-0QEMU_QEMU_HARDDISK_8547d473-0710-428a-9585-3879cf611acd', 'scsi-SQEMU_QEMU_HARDDISK_8547d473-0710-428a-9585-3879cf611acd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.634222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ef3193b-7b85-4a69-91dc-ff1919c1d0b3', 'scsi-SQEMU_QEMU_HARDDISK_8ef3193b-7b85-4a69-91dc-ff1919c1d0b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.634233 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4de995f9--e371--53ec--a5e6--95298d442fa2-osd--block--4de995f9--e371--53ec--a5e6--95298d442fa2', 'dm-uuid-LVM-4hDC3ozcjstwWQ3E5UxqBrjJp5mz1cIfsVn5PTVRwsj0jMyjGmhIMfAIPNf2GBTF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.634249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.634270 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:06:07.634299 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--ea687e85--c7c1--53f3--8dfd--7d637eed1a38-osd--block--ea687e85--c7c1--53f3--8dfd--7d637eed1a38', 'dm-uuid-LVM-bBiflqjnftduSHS5XiwByNmPAVGwW9bI5l8qlrblgYE7PdOCmSNNWyQdESxdCPSI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07 | INFO  | Task fa39db93-d11a-4281-8681-c5d1e32b1457 is in state SUCCESS 2025-09-19 17:06:07.635020 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.635040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.635052 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2025-09-19 17:06:07.635064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.635075 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.635086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.635097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.635108 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-19 17:06:07.635150 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part1', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part14', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part15', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part16', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.635176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--4de995f9--e371--53ec--a5e6--95298d442fa2-osd--block--4de995f9--e371--53ec--a5e6--95298d442fa2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WoixB1-M6Dg-l8nc-m7Vg-jCLx-Etkb-ybuhtE', 'scsi-0QEMU_QEMU_HARDDISK_5e704911-d475-45db-a46e-b2c1a2edd26e', 'scsi-SQEMU_QEMU_HARDDISK_5e704911-d475-45db-a46e-b2c1a2edd26e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.635189 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--ea687e85--c7c1--53f3--8dfd--7d637eed1a38-osd--block--ea687e85--c7c1--53f3--8dfd--7d637eed1a38'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-prhbwg-ulIC-7M5H-Gfur-Z1ct-zcRp-etKAGn', 'scsi-0QEMU_QEMU_HARDDISK_ea7e2490-24d2-49b7-b6d3-38bb6098dff1', 'scsi-SQEMU_QEMU_HARDDISK_ea7e2490-24d2-49b7-b6d3-38bb6098dff1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.635201 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc231350-c60d-45ad-9b08-eb0e8cdec0b5', 'scsi-SQEMU_QEMU_HARDDISK_bc231350-c60d-45ad-9b08-eb0e8cdec0b5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.635219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-19 17:06:07.635238 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:06:07.635251 | orchestrator | 2025-09-19 17:06:07.635268 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-09-19 17:06:07.635280 | orchestrator | Friday 19 September 2025 17:04:09 +0000 (0:00:00.583) 0:00:16.210 ****** 2025-09-19 17:06:07.635293 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--502e1679--2b8a--59ad--b2cc--f53252d80a70-osd--block--502e1679--2b8a--59ad--b2cc--f53252d80a70', 'dm-uuid-LVM-Vg40vHetn4R56D6Ffi9uOciNR5oL0Yiiyh9RxQKWB6m8dBBSQ0ooWdkOFaYkE1WC'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635305 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--189b9442--6cba--5a76--9378--3098f039bcec-osd--block--189b9442--6cba--5a76--9378--3098f039bcec', 'dm-uuid-LVM-noY6foXpitZX6cHQDHPdWcoWEE9GLEeGpeaCHFyUXan2usFgI5rj3Wakp48dwX55'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635317 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635329 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635370 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635383 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635394 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635405 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635416 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2-osd--block--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2', 'dm-uuid-LVM-QcDdh1J2jxOs6tp7Oe4XT0Zz1JSjF5dI8NXygTe7o6IzXrm3Ci2oWGBt6XdcCnD9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635428 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635452 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7-osd--block--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7', 'dm-uuid-LVM-LDASMkfHr0khVCowCzXLcMatR1wSlg7UDRK7AXLK7sqvKBaj0TbHwVG9FYXRIj2e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635470 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part1', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part14', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part15', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part16', 'scsi-SQEMU_QEMU_HARDDISK_bdf17f48-750a-4da2-b9bc-22b260044989-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635484 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635496 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--502e1679--2b8a--59ad--b2cc--f53252d80a70-osd--block--502e1679--2b8a--59ad--b2cc--f53252d80a70'], 'host': 
'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-c9QEt0-HRgd-bY03-Jd9F-51yF-8rcZ-PPDFR4', 'scsi-0QEMU_QEMU_HARDDISK_49605ec5-af84-4e56-b6e7-0932efbf1bcd', 'scsi-SQEMU_QEMU_HARDDISK_49605ec5-af84-4e56-b6e7-0932efbf1bcd'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635525 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635537 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--189b9442--6cba--5a76--9378--3098f039bcec-osd--block--189b9442--6cba--5a76--9378--3098f039bcec'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-g0pOpv-Df9H-7sCV-gXAD-ztyf-iKEa-62mI36', 'scsi-0QEMU_QEMU_HARDDISK_9516e090-09d3-47b2-a672-12f5ce683363', 'scsi-SQEMU_QEMU_HARDDISK_9516e090-09d3-47b2-a672-12f5ce683363'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635549 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-19 17:06:07.635560 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bfd7083e-59a5-451a-9789-189314eae1f5', 'scsi-SQEMU_QEMU_HARDDISK_bfd7083e-59a5-451a-9789-189314eae1f5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635572 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635596 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635613 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635625 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.635636 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635648 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635659 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635690 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part1', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part14', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part15', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part16', 'scsi-SQEMU_QEMU_HARDDISK_e87ac32e-cfe1-4641-bda3-fc317b60eb0f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635709 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4de995f9--e371--53ec--a5e6--95298d442fa2-osd--block--4de995f9--e371--53ec--a5e6--95298d442fa2', 'dm-uuid-LVM-4hDC3ozcjstwWQ3E5UxqBrjJp5mz1cIfsVn5PTVRwsj0jMyjGmhIMfAIPNf2GBTF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635721 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2-osd--block--6bee08d2--4d0c--5efd--9bb6--6357ac0256e2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GbVk8X-bjpf-wsn1-v0bH-HW56-9ucN-vMi0Ec', 'scsi-0QEMU_QEMU_HARDDISK_8c3574da-2fac-4f58-bc83-f51ba9425a73', 'scsi-SQEMU_QEMU_HARDDISK_8c3574da-2fac-4f58-bc83-f51ba9425a73'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635733 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ea687e85--c7c1--53f3--8dfd--7d637eed1a38-osd--block--ea687e85--c7c1--53f3--8dfd--7d637eed1a38', 'dm-uuid-LVM-bBiflqjnftduSHS5XiwByNmPAVGwW9bI5l8qlrblgYE7PdOCmSNNWyQdESxdCPSI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635762 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7-osd--block--c5ef3a10--bb06--5cc2--b298--3a565f19d9a7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FyKxvb-dSUu-GGIj-2HDa-wyiz-cOn4-Bzoouk', 'scsi-0QEMU_QEMU_HARDDISK_8547d473-0710-428a-9585-3879cf611acd', 'scsi-SQEMU_QEMU_HARDDISK_8547d473-0710-428a-9585-3879cf611acd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635774 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635786 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ef3193b-7b85-4a69-91dc-ff1919c1d0b3', 'scsi-SQEMU_QEMU_HARDDISK_8ef3193b-7b85-4a69-91dc-ff1919c1d0b3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635797 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635809 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635827 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635838 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.635861 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635892 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635904 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635916 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635927 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635958 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part1', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part14', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part15', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part16', 'scsi-SQEMU_QEMU_HARDDISK_a64d2943-68d6-43ca-9e98-c6f4ed260dcf-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635972 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--4de995f9--e371--53ec--a5e6--95298d442fa2-osd--block--4de995f9--e371--53ec--a5e6--95298d442fa2'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-WoixB1-M6Dg-l8nc-m7Vg-jCLx-Etkb-ybuhtE', 'scsi-0QEMU_QEMU_HARDDISK_5e704911-d475-45db-a46e-b2c1a2edd26e', 'scsi-SQEMU_QEMU_HARDDISK_5e704911-d475-45db-a46e-b2c1a2edd26e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.635984 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--ea687e85--c7c1--53f3--8dfd--7d637eed1a38-osd--block--ea687e85--c7c1--53f3--8dfd--7d637eed1a38'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-prhbwg-ulIC-7M5H-Gfur-Z1ct-zcRp-etKAGn', 'scsi-0QEMU_QEMU_HARDDISK_ea7e2490-24d2-49b7-b6d3-38bb6098dff1', 'scsi-SQEMU_QEMU_HARDDISK_ea7e2490-24d2-49b7-b6d3-38bb6098dff1'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.636001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bc231350-c60d-45ad-9b08-eb0e8cdec0b5', 'scsi-SQEMU_QEMU_HARDDISK_bc231350-c60d-45ad-9b08-eb0e8cdec0b5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.636026 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-19-16-09-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-19 17:06:07.636038 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.636049 | orchestrator |
2025-09-19 17:06:07.636061 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-09-19 17:06:07.636072 | orchestrator | Friday 19 September 2025 17:04:09 +0000 (0:00:00.549) 0:00:16.759 ******
2025-09-19 17:06:07.636083 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:06:07.636094 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:06:07.636105 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:06:07.636131 | orchestrator |
2025-09-19 17:06:07.636142 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-09-19 17:06:07.636153 | orchestrator | Friday 19 September 2025 17:04:10 +0000 (0:00:00.688) 0:00:17.448 ******
2025-09-19 17:06:07.636164 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:06:07.636175 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:06:07.636186 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:06:07.636196 | orchestrator |
2025-09-19 17:06:07.636207 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-19 17:06:07.636218 | orchestrator | Friday 19 September 2025 17:04:10 +0000 (0:00:00.519) 0:00:17.967 ******
2025-09-19 17:06:07.636229 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:06:07.636240 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:06:07.636251 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:06:07.636261 | orchestrator |
2025-09-19 17:06:07.636272 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-19 17:06:07.636283 | orchestrator | Friday 19 September 2025 17:04:11 +0000 (0:00:00.733) 0:00:18.701 ******
2025-09-19 17:06:07.636295 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.636306 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.636317 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.636328 | orchestrator |
2025-09-19 17:06:07.636339 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-09-19 17:06:07.636349 | orchestrator | Friday 19 September 2025 17:04:11 +0000 (0:00:00.312) 0:00:19.014 ******
2025-09-19 17:06:07.636367 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.636378 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.636389 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.636399 | orchestrator |
2025-09-19 17:06:07.636410 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-09-19 17:06:07.636421 | orchestrator | Friday 19 September 2025 17:04:12 +0000 (0:00:00.414) 0:00:19.428 ******
2025-09-19 17:06:07.636432 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.636443 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.636454 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.636465 | orchestrator |
2025-09-19 17:06:07.636476 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-09-19 17:06:07.636487 | orchestrator | Friday 19 September 2025 17:04:12 +0000 (0:00:00.599) 0:00:20.027 ******
2025-09-19 17:06:07.636498 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 17:06:07.636510 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 17:06:07.636521 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 17:06:07.636531 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 17:06:07.636542 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 17:06:07.636553 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 17:06:07.636563 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 17:06:07.636574 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 17:06:07.636585 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 17:06:07.636596 | orchestrator |
2025-09-19 17:06:07.636607 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-09-19 17:06:07.636618 | orchestrator | Friday 19 September 2025 17:04:13 +0000 (0:00:01.002) 0:00:21.029 ******
2025-09-19 17:06:07.636629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-19 17:06:07.636640 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-19 17:06:07.636651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-19 17:06:07.636661 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.636672 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-19 17:06:07.636683 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-19 17:06:07.636693 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-19 17:06:07.636704 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.636714 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-19 17:06:07.636725 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-19 17:06:07.636736 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-19 17:06:07.636746 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.636757 | orchestrator |
2025-09-19 17:06:07.636768 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-09-19 17:06:07.636779 | orchestrator | Friday 19 September 2025 17:04:14 +0000 (0:00:00.700) 0:00:21.392 ******
2025-09-19 17:06:07.636790 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:06:07.636801 | orchestrator |
2025-09-19 17:06:07.636812 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-19 17:06:07.636824 | orchestrator | Friday 19 September 2025 17:04:14 +0000 (0:00:00.700) 0:00:22.092 ******
2025-09-19 17:06:07.636835 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.636846 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.636857 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.636882 | orchestrator |
2025-09-19 17:06:07.636899 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-19 17:06:07.636911 | orchestrator | Friday 19 September 2025 17:04:15 +0000 (0:00:00.337) 0:00:22.430 ******
2025-09-19 17:06:07.636928 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.636939 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.636955 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.636966 | orchestrator |
2025-09-19 17:06:07.636977 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-19 17:06:07.636988 | orchestrator | Friday 19 September 2025 17:04:15 +0000 (0:00:00.326) 0:00:22.757 ******
2025-09-19 17:06:07.636999 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.637010 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.637020 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:06:07.637031 | orchestrator |
2025-09-19 17:06:07.637042 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-19 17:06:07.637052 | orchestrator | Friday 19 September 2025 17:04:15 +0000 (0:00:00.327) 0:00:23.085 ******
2025-09-19 17:06:07.637063 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:06:07.637074 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:06:07.637085 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:06:07.637095 | orchestrator |
2025-09-19 17:06:07.637106 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-19 17:06:07.637117 | orchestrator | Friday 19 September 2025 17:04:16 +0000 (0:00:00.592) 0:00:23.677 ******
2025-09-19 17:06:07.637128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:06:07.637138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:06:07.637149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:06:07.637160 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.637170 | orchestrator |
2025-09-19 17:06:07.637181 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-19 17:06:07.637192 | orchestrator | Friday 19 September 2025 17:04:16 +0000 (0:00:00.395) 0:00:24.072 ******
2025-09-19 17:06:07.637203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:06:07.637213 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:06:07.637224 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:06:07.637235 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.637245 | orchestrator |
2025-09-19 17:06:07.637256 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-19 17:06:07.637267 | orchestrator | Friday 19 September 2025 17:04:17 +0000 (0:00:00.373) 0:00:24.446 ******
2025-09-19 17:06:07.637278 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:06:07.637289 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-19 17:06:07.637300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-19 17:06:07.637311 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.637321 | orchestrator |
2025-09-19 17:06:07.637332 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-19 17:06:07.637343 | orchestrator | Friday 19 September 2025 17:04:17 +0000 (0:00:00.378) 0:00:24.825 ******
2025-09-19 17:06:07.637354 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:06:07.637365 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:06:07.637376 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:06:07.637386 | orchestrator |
2025-09-19 17:06:07.637397 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-19 17:06:07.637408 | orchestrator | Friday 19 September 2025 17:04:18 +0000 (0:00:00.335) 0:00:25.160 ******
2025-09-19 17:06:07.637419 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-19 17:06:07.637430 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-19 17:06:07.637441 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-19 17:06:07.637452 | orchestrator |
2025-09-19 17:06:07.637463 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-19 17:06:07.637474 | orchestrator | Friday 19 September 2025 17:04:18 +0000 (0:00:00.524) 0:00:25.685 ******
2025-09-19 17:06:07.637492 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 17:06:07.637503 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 17:06:07.637514 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 17:06:07.637524 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:06:07.637535 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-19 17:06:07.637546 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-19 17:06:07.637557 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-19 17:06:07.637567 | orchestrator |
2025-09-19 17:06:07.637578 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-19 17:06:07.637589 | orchestrator | Friday 19 September 2025 17:04:19 +0000 (0:00:00.984) 0:00:26.670 ******
2025-09-19 17:06:07.637600 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-19 17:06:07.637611 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-19 17:06:07.637622 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-19 17:06:07.637633 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-19 17:06:07.637644 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-19 17:06:07.637655 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-19 17:06:07.637666 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-19 17:06:07.637676 | orchestrator |
2025-09-19 17:06:07.637693 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-09-19 17:06:07.637704 | orchestrator | Friday 19 September 2025 17:04:21 +0000 (0:00:01.921) 0:00:28.592 ******
2025-09-19 17:06:07.637715 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:06:07.637726 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:06:07.637742 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-09-19 17:06:07.637754 | orchestrator |
2025-09-19 17:06:07.637765 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-09-19 17:06:07.637775 | orchestrator | Friday 19 September 2025 17:04:21 +0000 (0:00:00.371) 0:00:28.963 ******
2025-09-19 17:06:07.637787 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 17:06:07.637799 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 17:06:07.637810 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 17:06:07.637822 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 17:06:07.637833 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-09-19 17:06:07.637855 | orchestrator |
2025-09-19 17:06:07.637866 | orchestrator | TASK [generate keys] ***********************************************************
2025-09-19 17:06:07.637945 | orchestrator | Friday 19 September 2025 17:05:08 +0000 (0:00:46.998) 0:01:15.961 ******
2025-09-19 17:06:07.637956 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:06:07.637967 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:06:07.637978 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:06:07.637989 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:06:07.638000 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:06:07.638010 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:06:07.638071 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-09-19 17:06:07.638083 | orchestrator |
2025-09-19 17:06:07.638094 | orchestrator | TASK [get keys from monitors] **************************************************
2025-09-19 17:06:07.638104 | orchestrator | Friday 19 September 2025 17:05:34 +0000 (0:00:25.652) 0:01:41.614 ******
2025-09-19 17:06:07.638114 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:06:07.638124 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:06:07.638133 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:06:07.638143 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:06:07.638152 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:06:07.638162 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-19 17:06:07.638172 | orchestrator |
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-19 17:06:07.638181 | orchestrator | 2025-09-19 17:06:07.638191 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-19 17:06:07.638201 | orchestrator | Friday 19 September 2025 17:05:47 +0000 (0:00:12.783) 0:01:54.397 ****** 2025-09-19 17:06:07.638210 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:06:07.638220 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 17:06:07.638230 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 17:06:07.638239 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:06:07.638249 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 17:06:07.638259 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 17:06:07.638276 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:06:07.638286 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 17:06:07.638296 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 17:06:07.638311 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:06:07.638321 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 17:06:07.638331 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 17:06:07.638341 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:06:07.638351 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-09-19 17:06:07.638360 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 17:06:07.638379 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-19 17:06:07.638389 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-19 17:06:07.638398 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-19 17:06:07.638408 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-09-19 17:06:07.638417 | orchestrator | 2025-09-19 17:06:07.638427 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:06:07.638437 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-09-19 17:06:07.638448 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-19 17:06:07.638458 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-19 17:06:07.638468 | orchestrator | 2025-09-19 17:06:07.638478 | orchestrator | 2025-09-19 17:06:07.638487 | orchestrator | 2025-09-19 17:06:07.638497 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:06:07.638507 | orchestrator | Friday 19 September 2025 17:06:05 +0000 (0:00:18.492) 0:02:12.890 ****** 2025-09-19 17:06:07.638517 | orchestrator | =============================================================================== 2025-09-19 17:06:07.638527 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.00s 2025-09-19 17:06:07.638536 | orchestrator | generate keys ---------------------------------------------------------- 25.65s 2025-09-19 17:06:07.638546 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.49s 
2025-09-19 17:06:07.638555 | orchestrator | get keys from monitors ------------------------------------------------- 12.78s 2025-09-19 17:06:07.638565 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.15s 2025-09-19 17:06:07.638575 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.08s 2025-09-19 17:06:07.638584 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.92s 2025-09-19 17:06:07.638594 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.00s 2025-09-19 17:06:07.638604 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.98s 2025-09-19 17:06:07.638613 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.77s 2025-09-19 17:06:07.638623 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.73s 2025-09-19 17:06:07.638633 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.72s 2025-09-19 17:06:07.638642 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.70s 2025-09-19 17:06:07.638652 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.69s 2025-09-19 17:06:07.638661 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.67s 2025-09-19 17:06:07.638671 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.61s 2025-09-19 17:06:07.638681 | orchestrator | ceph-facts : Set osd_pool_default_crush_rule fact ----------------------- 0.60s 2025-09-19 17:06:07.638691 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.59s 2025-09-19 17:06:07.638700 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.58s 2025-09-19 
17:06:07.638710 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.55s 2025-09-19 17:06:07.638720 | orchestrator | 2025-09-19 17:06:07 | INFO  | Task 7b6199db-9aaa-4b86-bb31-d34feae65f1a is in state STARTED 2025-09-19 17:06:07.638730 | orchestrator | 2025-09-19 17:06:07 | INFO  | Task 480fa2ca-0f28-4e73-952a-e69a9d638c75 is in state STARTED 2025-09-19 17:06:07.638740 | orchestrator | 2025-09-19 17:06:07 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:07.638757 | orchestrator | 2025-09-19 17:06:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:10.691700 | orchestrator | 2025-09-19 17:06:10 | INFO  | Task 7b6199db-9aaa-4b86-bb31-d34feae65f1a is in state STARTED 2025-09-19 17:06:10.693113 | orchestrator | 2025-09-19 17:06:10 | INFO  | Task 480fa2ca-0f28-4e73-952a-e69a9d638c75 is in state STARTED 2025-09-19 17:06:10.695145 | orchestrator | 2025-09-19 17:06:10 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:10.695395 | orchestrator | 2025-09-19 17:06:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:13.747703 | orchestrator | 2025-09-19 17:06:13 | INFO  | Task 7b6199db-9aaa-4b86-bb31-d34feae65f1a is in state STARTED 2025-09-19 17:06:13.749802 | orchestrator | 2025-09-19 17:06:13 | INFO  | Task 480fa2ca-0f28-4e73-952a-e69a9d638c75 is in state STARTED 2025-09-19 17:06:13.751346 | orchestrator | 2025-09-19 17:06:13 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:13.751536 | orchestrator | 2025-09-19 17:06:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:16.796961 | orchestrator | 2025-09-19 17:06:16 | INFO  | Task 7b6199db-9aaa-4b86-bb31-d34feae65f1a is in state STARTED 2025-09-19 17:06:16.797312 | orchestrator | 2025-09-19 17:06:16 | INFO  | Task 480fa2ca-0f28-4e73-952a-e69a9d638c75 is in state STARTED 2025-09-19 17:06:16.798591 | orchestrator | 
2025-09-19 17:06:16 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:16.798615 | orchestrator | 2025-09-19 17:06:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:19.855403 | orchestrator | 2025-09-19 17:06:19 | INFO  | Task 7b6199db-9aaa-4b86-bb31-d34feae65f1a is in state STARTED 2025-09-19 17:06:19.859314 | orchestrator | 2025-09-19 17:06:19 | INFO  | Task 480fa2ca-0f28-4e73-952a-e69a9d638c75 is in state STARTED 2025-09-19 17:06:19.860864 | orchestrator | 2025-09-19 17:06:19 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:19.861390 | orchestrator | 2025-09-19 17:06:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:22.910587 | orchestrator | 2025-09-19 17:06:22 | INFO  | Task 7b6199db-9aaa-4b86-bb31-d34feae65f1a is in state STARTED 2025-09-19 17:06:22.910794 | orchestrator | 2025-09-19 17:06:22 | INFO  | Task 480fa2ca-0f28-4e73-952a-e69a9d638c75 is in state STARTED 2025-09-19 17:06:22.911623 | orchestrator | 2025-09-19 17:06:22 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:22.911664 | orchestrator | 2025-09-19 17:06:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:25.964292 | orchestrator | 2025-09-19 17:06:25 | INFO  | Task 7b6199db-9aaa-4b86-bb31-d34feae65f1a is in state STARTED 2025-09-19 17:06:25.966999 | orchestrator | 2025-09-19 17:06:25 | INFO  | Task 480fa2ca-0f28-4e73-952a-e69a9d638c75 is in state STARTED 2025-09-19 17:06:25.968621 | orchestrator | 2025-09-19 17:06:25 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:25.968650 | orchestrator | 2025-09-19 17:06:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:29.008370 | orchestrator | 2025-09-19 17:06:29 | INFO  | Task 7b6199db-9aaa-4b86-bb31-d34feae65f1a is in state STARTED 2025-09-19 17:06:29.009472 | orchestrator | 2025-09-19 17:06:29 | INFO  | Task 
480fa2ca-0f28-4e73-952a-e69a9d638c75 is in state STARTED 2025-09-19 17:06:29.011146 | orchestrator | 2025-09-19 17:06:29 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:29.011211 | orchestrator | 2025-09-19 17:06:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:32.053476 | orchestrator | 2025-09-19 17:06:32 | INFO  | Task 7b6199db-9aaa-4b86-bb31-d34feae65f1a is in state STARTED 2025-09-19 17:06:32.053575 | orchestrator | 2025-09-19 17:06:32 | INFO  | Task 480fa2ca-0f28-4e73-952a-e69a9d638c75 is in state SUCCESS 2025-09-19 17:06:32.053649 | orchestrator | 2025-09-19 17:06:32.055503 | orchestrator | 2025-09-19 17:06:32.055542 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:06:32.055555 | orchestrator | 2025-09-19 17:06:32.055568 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:06:32.055580 | orchestrator | Friday 19 September 2025 17:04:46 +0000 (0:00:00.235) 0:00:00.235 ****** 2025-09-19 17:06:32.055592 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:06:32.055605 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:06:32.055633 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:06:32.055646 | orchestrator | 2025-09-19 17:06:32.055659 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:06:32.055671 | orchestrator | Friday 19 September 2025 17:04:47 +0000 (0:00:00.245) 0:00:00.480 ****** 2025-09-19 17:06:32.055683 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-09-19 17:06:32.055696 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-09-19 17:06:32.055707 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-09-19 17:06:32.055719 | orchestrator | 2025-09-19 17:06:32.055731 | orchestrator | PLAY [Apply role horizon] 
****************************************************** 2025-09-19 17:06:32.055743 | orchestrator | 2025-09-19 17:06:32.055755 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 17:06:32.055767 | orchestrator | Friday 19 September 2025 17:04:47 +0000 (0:00:00.335) 0:00:00.815 ****** 2025-09-19 17:06:32.055779 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:06:32.055791 | orchestrator | 2025-09-19 17:06:32.055819 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-19 17:06:32.055831 | orchestrator | Friday 19 September 2025 17:04:48 +0000 (0:00:00.434) 0:00:01.250 ****** 2025-09-19 17:06:32.055849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 17:06:32.055940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 17:06:32.055958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 17:06:32.055979 | orchestrator | 2025-09-19 17:06:32.055990 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-19 17:06:32.056002 | orchestrator | Friday 19 September 2025 17:04:49 +0000 (0:00:01.090) 0:00:02.341 ****** 2025-09-19 17:06:32.056013 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:06:32.056023 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:06:32.056034 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:06:32.056045 | orchestrator | 2025-09-19 17:06:32.056056 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2025-09-19 17:06:32.056067 | orchestrator | Friday 19 September 2025 17:04:49 +0000 (0:00:00.373) 0:00:02.714 ****** 2025-09-19 17:06:32.056078 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-19 17:06:32.056095 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-19 17:06:32.056106 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-19 17:06:32.056117 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-19 17:06:32.056128 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-19 17:06:32.056139 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-19 17:06:32.056150 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-19 17:06:32.056160 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-19 17:06:32.056171 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-19 17:06:32.056182 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-19 17:06:32.056193 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-19 17:06:32.056204 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-19 17:06:32.056214 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-19 17:06:32.056230 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-19 17:06:32.056241 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-19 17:06:32.056252 | 
orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-19 17:06:32.056263 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-19 17:06:32.056274 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-19 17:06:32.056285 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-19 17:06:32.056295 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-19 17:06:32.056306 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-19 17:06:32.056318 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-19 17:06:32.056329 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-19 17:06:32.056340 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-19 17:06:32.056358 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-19 17:06:32.056371 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-19 17:06:32.056382 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-19 17:06:32.056393 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-19 17:06:32.056404 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => 
(item={'name': 'keystone', 'enabled': True}) 2025-09-19 17:06:32.056415 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-19 17:06:32.056426 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-19 17:06:32.056436 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-19 17:06:32.056447 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-19 17:06:32.056459 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-19 17:06:32.056470 | orchestrator | 2025-09-19 17:06:32.056481 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 17:06:32.056492 | orchestrator | Friday 19 September 2025 17:04:50 +0000 (0:00:00.638) 0:00:03.353 ****** 2025-09-19 17:06:32.056503 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:06:32.056514 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:06:32.056525 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:06:32.056536 | orchestrator | 2025-09-19 17:06:32.056547 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 17:06:32.056558 | orchestrator | Friday 19 September 2025 17:04:50 +0000 (0:00:00.265) 0:00:03.619 ****** 2025-09-19 17:06:32.056569 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:06:32.056580 | orchestrator | 2025-09-19 17:06:32.056596 | orchestrator | TASK [horizon : Update custom policy file name] 
********************************
2025-09-19 17:06:32.056608 | orchestrator | Friday 19 September 2025 17:04:50 +0000 (0:00:00.131) 0:00:03.750 ******
2025-09-19 17:06:32.056619 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.056630 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:06:32.056641 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:06:32.056652 | orchestrator |
2025-09-19 17:06:32.056663 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 17:06:32.056674 | orchestrator | Friday 19 September 2025 17:04:50 +0000 (0:00:00.354) 0:00:04.104 ******
2025-09-19 17:06:32.056685 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:06:32.056696 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:06:32.056707 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:06:32.056718 | orchestrator |
2025-09-19 17:06:32.056729 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 17:06:32.056740 | orchestrator | Friday 19 September 2025 17:04:51 +0000 (0:00:00.282) 0:00:04.386 ******
2025-09-19 17:06:32.056751 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.056762 | orchestrator |
2025-09-19 17:06:32.056773 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 17:06:32.056790 | orchestrator | Friday 19 September 2025 17:04:51 +0000 (0:00:00.120) 0:00:04.507 ******
2025-09-19 17:06:32.056801 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.056812 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:06:32.056823 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:06:32.056834 | orchestrator |
2025-09-19 17:06:32.056845 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 17:06:32.056861 | orchestrator | Friday 19 September 2025 17:04:51 +0000 (0:00:00.235) 0:00:04.742 ******
2025-09-19 17:06:32.056891 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:06:32.056902 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:06:32.056913 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:06:32.056924 | orchestrator |
2025-09-19 17:06:32.056935 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 17:06:32.056946 | orchestrator | Friday 19 September 2025 17:04:51 +0000 (0:00:00.268) 0:00:05.011 ******
2025-09-19 17:06:32.056957 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.056968 | orchestrator |
2025-09-19 17:06:32.056980 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 17:06:32.056991 | orchestrator | Friday 19 September 2025 17:04:51 +0000 (0:00:00.129) 0:00:05.141 ******
2025-09-19 17:06:32.057002 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.057013 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:06:32.057024 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:06:32.057034 | orchestrator |
2025-09-19 17:06:32.057046 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 17:06:32.057057 | orchestrator | Friday 19 September 2025 17:04:52 +0000 (0:00:00.413) 0:00:05.555 ******
2025-09-19 17:06:32.057068 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:06:32.057079 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:06:32.057090 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:06:32.057101 | orchestrator |
2025-09-19 17:06:32.057111 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 17:06:32.057122 | orchestrator | Friday 19 September 2025 17:04:52 +0000 (0:00:00.289) 0:00:05.845 ******
2025-09-19 17:06:32.057133 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.057144 | orchestrator |
2025-09-19 17:06:32.057155 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 17:06:32.057166 | orchestrator | Friday 19 September 2025 17:04:52 +0000 (0:00:00.137) 0:00:05.982 ******
2025-09-19 17:06:32.057177 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.057188 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:06:32.057199 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:06:32.057210 | orchestrator |
2025-09-19 17:06:32.057221 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 17:06:32.057232 | orchestrator | Friday 19 September 2025 17:04:53 +0000 (0:00:00.279) 0:00:06.262 ******
2025-09-19 17:06:32.057243 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:06:32.057254 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:06:32.057265 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:06:32.057276 | orchestrator |
2025-09-19 17:06:32.057287 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 17:06:32.057298 | orchestrator | Friday 19 September 2025 17:04:53 +0000 (0:00:00.313) 0:00:06.576 ******
2025-09-19 17:06:32.057309 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.057320 | orchestrator |
2025-09-19 17:06:32.057331 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 17:06:32.057342 | orchestrator | Friday 19 September 2025 17:04:53 +0000 (0:00:00.330) 0:00:06.907 ******
2025-09-19 17:06:32.057353 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.057364 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:06:32.057375 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:06:32.057386 | orchestrator |
2025-09-19 17:06:32.057397 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 17:06:32.057408 | orchestrator | Friday 19 September 2025 17:04:53 +0000 (0:00:00.302) 0:00:07.209 ******
2025-09-19 17:06:32.057426 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:06:32.057437 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:06:32.057448 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:06:32.057472 | orchestrator |
2025-09-19 17:06:32.057483 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 17:06:32.057505 | orchestrator | Friday 19 September 2025 17:04:54 +0000 (0:00:00.313) 0:00:07.523 ******
2025-09-19 17:06:32.057516 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.057527 | orchestrator |
2025-09-19 17:06:32.057538 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 17:06:32.057549 | orchestrator | Friday 19 September 2025 17:04:54 +0000 (0:00:00.123) 0:00:07.646 ******
2025-09-19 17:06:32.057560 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.057571 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:06:32.057582 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:06:32.057593 | orchestrator |
2025-09-19 17:06:32.057604 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 17:06:32.057620 | orchestrator | Friday 19 September 2025 17:04:54 +0000 (0:00:00.298) 0:00:07.945 ******
2025-09-19 17:06:32.057631 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:06:32.057642 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:06:32.057653 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:06:32.057664 | orchestrator |
2025-09-19 17:06:32.057675 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 17:06:32.057686 | orchestrator | Friday 19 September 2025 17:04:55 +0000 (0:00:00.528) 0:00:08.474 ******
2025-09-19 17:06:32.057697 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.057708 | orchestrator |
2025-09-19 17:06:32.057719 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 17:06:32.057730 | orchestrator | Friday 19 September 2025 17:04:55 +0000 (0:00:00.138) 0:00:08.613 ******
2025-09-19 17:06:32.057741 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.057752 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:06:32.057762 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:06:32.057773 | orchestrator |
2025-09-19 17:06:32.057784 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 17:06:32.057795 | orchestrator | Friday 19 September 2025 17:04:55 +0000 (0:00:00.277) 0:00:08.891 ******
2025-09-19 17:06:32.057806 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:06:32.057817 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:06:32.057828 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:06:32.057839 | orchestrator |
2025-09-19 17:06:32.057850 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-09-19 17:06:32.057861 | orchestrator | Friday 19 September 2025 17:04:55 +0000 (0:00:00.309) 0:00:09.201 ******
2025-09-19 17:06:32.057871 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.057912 | orchestrator |
2025-09-19 17:06:32.057928 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-09-19 17:06:32.057939 | orchestrator | Friday 19 September 2025 17:04:56 +0000 (0:00:00.114) 0:00:09.315 ******
2025-09-19 17:06:32.057950 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.057961 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:06:32.057972 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:06:32.057982 | orchestrator |
2025-09-19 17:06:32.057993 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-09-19 17:06:32.058004 | orchestrator | Friday 19 September 2025 17:04:56 +0000 (0:00:00.283)
0:00:09.598 ****** 2025-09-19 17:06:32.058060 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:06:32.058075 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:06:32.058086 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:06:32.058097 | orchestrator | 2025-09-19 17:06:32.058108 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 17:06:32.058119 | orchestrator | Friday 19 September 2025 17:04:56 +0000 (0:00:00.532) 0:00:10.131 ****** 2025-09-19 17:06:32.058140 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:06:32.058151 | orchestrator | 2025-09-19 17:06:32.058162 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-19 17:06:32.058173 | orchestrator | Friday 19 September 2025 17:04:57 +0000 (0:00:00.129) 0:00:10.260 ****** 2025-09-19 17:06:32.058184 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:06:32.058195 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:06:32.058206 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:06:32.058217 | orchestrator | 2025-09-19 17:06:32.058227 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-19 17:06:32.058238 | orchestrator | Friday 19 September 2025 17:04:57 +0000 (0:00:00.295) 0:00:10.556 ****** 2025-09-19 17:06:32.058249 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:06:32.058260 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:06:32.058271 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:06:32.058282 | orchestrator | 2025-09-19 17:06:32.058293 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-19 17:06:32.058304 | orchestrator | Friday 19 September 2025 17:04:57 +0000 (0:00:00.320) 0:00:10.876 ****** 2025-09-19 17:06:32.058315 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:06:32.058325 | orchestrator | 2025-09-19 17:06:32.058337 | orchestrator | 
TASK [horizon : Update custom policy file name] ********************************
2025-09-19 17:06:32.058348 | orchestrator | Friday 19 September 2025 17:04:57 +0000 (0:00:00.123) 0:00:10.999 ******
2025-09-19 17:06:32.058359 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.058370 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:06:32.058380 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:06:32.058391 | orchestrator |
2025-09-19 17:06:32.058402 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-09-19 17:06:32.058413 | orchestrator | Friday 19 September 2025 17:04:58 +0000 (0:00:00.473) 0:00:11.473 ******
2025-09-19 17:06:32.058424 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:06:32.058435 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:06:32.058446 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:06:32.058457 | orchestrator |
2025-09-19 17:06:32.058468 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-09-19 17:06:32.058479 | orchestrator | Friday 19 September 2025 17:05:00 +0000 (0:00:01.798) 0:00:13.272 ******
2025-09-19 17:06:32.058490 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-19 17:06:32.058501 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-19 17:06:32.058512 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-09-19 17:06:32.058522 | orchestrator |
2025-09-19 17:06:32.058533 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-09-19 17:06:32.058544 | orchestrator | Friday 19 September 2025 17:05:01 +0000 (0:00:01.921) 0:00:15.193 ******
2025-09-19 17:06:32.058555 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-19 17:06:32.058566 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-19 17:06:32.058577 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2)
2025-09-19 17:06:32.058588 | orchestrator |
2025-09-19 17:06:32.058599 | orchestrator | TASK [horizon : Copying over custom-settings.py] *******************************
2025-09-19 17:06:32.058617 | orchestrator | Friday 19 September 2025 17:05:04 +0000 (0:00:02.160) 0:00:17.353 ******
2025-09-19 17:06:32.058628 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-19 17:06:32.058639 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-19 17:06:32.058650 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2)
2025-09-19 17:06:32.058668 | orchestrator |
2025-09-19 17:06:32.058679 | orchestrator | TASK [horizon : Copying over existing policy file] *****************************
2025-09-19 17:06:32.058690 | orchestrator | Friday 19 September 2025 17:05:06 +0000 (0:00:02.012) 0:00:19.365 ******
2025-09-19 17:06:32.058701 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.058712 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:06:32.058723 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:06:32.058734 | orchestrator |
2025-09-19 17:06:32.058745 | orchestrator | TASK [horizon : Copying over custom themes] ************************************
2025-09-19 17:06:32.058756 | orchestrator | Friday 19 September 2025 17:05:06 +0000 (0:00:00.306) 0:00:19.671 ******
2025-09-19 17:06:32.058767 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.058778 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:06:32.058789 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:06:32.058799 | orchestrator |
2025-09-19 17:06:32.058810 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-19 17:06:32.058827 | orchestrator | Friday 19 September 2025 17:05:06 +0000 (0:00:00.299) 0:00:19.971 ******
2025-09-19 17:06:32.058838 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:06:32.058849 | orchestrator |
2025-09-19 17:06:32.058860 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2025-09-19 17:06:32.058871 | orchestrator | Friday 19 September 2025 17:05:07 +0000 (0:00:00.610) 0:00:20.581 ******
2025-09-19 17:06:32.058901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'],
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 17:06:32.058929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 17:06:32.058956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 17:06:32.058968 | orchestrator |
2025-09-19 17:06:32.058979 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2025-09-19 17:06:32.058996 | orchestrator | Friday 19 September 2025 17:05:09 +0000 (0:00:01.682) 0:00:22.264 ******
2025-09-19 17:06:32.059023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no',
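The `haproxy` mapping printed in these loop items drives how the load balancer exposes Horizon: internal and external HTTP frontends terminating on port 443, plain-HTTP redirect frontends on port 80, and an ACME-challenge escape hatch that routes `/.well-known/acme-challenge/` requests to `acme_client_back`. A minimal sketch of rendering such a mapping into HAProxy-style frontend rules (illustrative only; the real kolla-ansible haproxy-config templates are more involved):

```python
def render_frontend(name, svc):
    """Render an HAProxy-style frontend from a kolla-style service mapping.

    Illustrative sketch: `render_frontend` and the `_front`/`_back` naming
    are assumptions, not kolla-ansible's actual template output.
    """
    lines = [f"frontend {name}_front", f"    bind *:{svc['port']}"]
    # Extra ACL/routing rules, e.g. the ACME challenge diversion seen in
    # the log's 'frontend_http_extra' list.
    for extra in svc.get("frontend_http_extra", []):
        lines.append(f"    {extra}")
    lines.append(f"    default_backend {name}_back")
    return "\n".join(lines)

# Trimmed-down copy of the 'horizon_external' entry from the log above.
horizon_external = {
    "port": "443",
    "frontend_http_extra": [
        "use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }"
    ],
}
print(render_frontend("horizon_external", horizon_external))
```

The `use_backend ... if { path_reg ... }` line is a standard HAProxy conditional backend switch, which is what lets the ACME client answer HTTP-01 challenges while all other traffic reaches Horizon.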
'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 17:06:32.059036 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.059054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 17:06:32.059073 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:06:32.059091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 17:06:32.059103 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:06:32.059114 | orchestrator |
2025-09-19 17:06:32.059125 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2025-09-19 17:06:32.059136 | orchestrator | Friday 19 September 2025 17:05:09 +0000 (0:00:00.614) 0:00:22.878 ******
2025-09-19 17:06:32.059154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra':
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 17:06:32.059173 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:06:32.059190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 17:06:32.059203 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:06:32.059222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 17:06:32.059241 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:06:32.059251 | orchestrator |
2025-09-19 17:06:32.059262 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2025-09-19 17:06:32.059273 | orchestrator | Friday 19 September 2025 17:05:10 +0000 (0:00:00.832) 0:00:23.711 ******
2025-09-19 17:06:32.059290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-19 17:06:32.059322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external':
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 17:06:32.059336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-19 17:06:32.059355 | orchestrator | 2025-09-19 17:06:32.059366 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 17:06:32.059377 | orchestrator | Friday 19 September 2025 17:05:12 +0000 (0:00:01.864) 0:00:25.576 ****** 2025-09-19 17:06:32.059388 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:06:32.059399 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:06:32.059409 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:06:32.059420 | orchestrator | 2025-09-19 17:06:32.059431 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-19 17:06:32.059442 | orchestrator | Friday 19 September 2025 17:05:12 +0000 (0:00:00.297) 0:00:25.873 ****** 2025-09-19 17:06:32.059453 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:06:32.059464 | orchestrator | 2025-09-19 17:06:32.059475 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-09-19 17:06:32.059491 | orchestrator | Friday 19 September 2025 17:05:13 +0000 (0:00:00.526) 0:00:26.400 ****** 2025-09-19 17:06:32.059502 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:06:32.059513 | orchestrator | 2025-09-19 17:06:32.059524 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-09-19 17:06:32.059535 | orchestrator | Friday 19 September 2025 
17:05:15 +0000 (0:00:02.241) 0:00:28.642 ****** 2025-09-19 17:06:32.059546 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:06:32.059556 | orchestrator | 2025-09-19 17:06:32.059567 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-09-19 17:06:32.059578 | orchestrator | Friday 19 September 2025 17:05:18 +0000 (0:00:02.853) 0:00:31.496 ****** 2025-09-19 17:06:32.059589 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:06:32.059600 | orchestrator | 2025-09-19 17:06:32.059611 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 17:06:32.059622 | orchestrator | Friday 19 September 2025 17:05:34 +0000 (0:00:16.372) 0:00:47.868 ****** 2025-09-19 17:06:32.059633 | orchestrator | 2025-09-19 17:06:32.059644 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 17:06:32.059655 | orchestrator | Friday 19 September 2025 17:05:34 +0000 (0:00:00.067) 0:00:47.936 ****** 2025-09-19 17:06:32.059666 | orchestrator | 2025-09-19 17:06:32.059676 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-09-19 17:06:32.059687 | orchestrator | Friday 19 September 2025 17:05:34 +0000 (0:00:00.062) 0:00:47.999 ****** 2025-09-19 17:06:32.059698 | orchestrator | 2025-09-19 17:06:32.059709 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-09-19 17:06:32.059725 | orchestrator | Friday 19 September 2025 17:05:34 +0000 (0:00:00.071) 0:00:48.070 ****** 2025-09-19 17:06:32.059736 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:06:32.059747 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:06:32.059758 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:06:32.059768 | orchestrator | 2025-09-19 17:06:32.059779 | orchestrator | PLAY RECAP ********************************************************************* 
2025-09-19 17:06:32.059791 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-09-19 17:06:32.059802 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-19 17:06:32.059813 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-09-19 17:06:32.059824 | orchestrator | 2025-09-19 17:06:32.059835 | orchestrator | 2025-09-19 17:06:32.059846 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:06:32.059863 | orchestrator | Friday 19 September 2025 17:06:29 +0000 (0:00:54.923) 0:01:42.994 ****** 2025-09-19 17:06:32.059892 | orchestrator | =============================================================================== 2025-09-19 17:06:32.059904 | orchestrator | horizon : Restart horizon container ------------------------------------ 54.92s 2025-09-19 17:06:32.059914 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.37s 2025-09-19 17:06:32.059925 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.85s 2025-09-19 17:06:32.059936 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.24s 2025-09-19 17:06:32.059947 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.16s 2025-09-19 17:06:32.059957 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.01s 2025-09-19 17:06:32.059968 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.92s 2025-09-19 17:06:32.059979 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.86s 2025-09-19 17:06:32.059990 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.80s 2025-09-19 17:06:32.060000 
| orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.68s 2025-09-19 17:06:32.060011 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.09s 2025-09-19 17:06:32.060022 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.83s 2025-09-19 17:06:32.060032 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.64s 2025-09-19 17:06:32.060043 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.61s 2025-09-19 17:06:32.060054 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s 2025-09-19 17:06:32.060065 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2025-09-19 17:06:32.060075 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2025-09-19 17:06:32.060086 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.53s 2025-09-19 17:06:32.060097 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.47s 2025-09-19 17:06:32.060108 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.43s 2025-09-19 17:06:32.060119 | orchestrator | 2025-09-19 17:06:32 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:32.060129 | orchestrator | 2025-09-19 17:06:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:35.096480 | orchestrator | 2025-09-19 17:06:35 | INFO  | Task 7b6199db-9aaa-4b86-bb31-d34feae65f1a is in state SUCCESS 2025-09-19 17:06:35.097562 | orchestrator | 2025-09-19 17:06:35 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:35.097590 | orchestrator | 2025-09-19 17:06:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 
17:06:38.147206 | orchestrator | 2025-09-19 17:06:38 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:06:38.148754 | orchestrator | 2025-09-19 17:06:38 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:38.149135 | orchestrator | 2025-09-19 17:06:38 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:41.201298 | orchestrator | 2025-09-19 17:06:41 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:06:41.202833 | orchestrator | 2025-09-19 17:06:41 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:41.202868 | orchestrator | 2025-09-19 17:06:41 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:44.252694 | orchestrator | 2025-09-19 17:06:44 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:06:44.253588 | orchestrator | 2025-09-19 17:06:44 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:44.253639 | orchestrator | 2025-09-19 17:06:44 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:47.291869 | orchestrator | 2025-09-19 17:06:47 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:06:47.293926 | orchestrator | 2025-09-19 17:06:47 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:47.293953 | orchestrator | 2025-09-19 17:06:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:50.334686 | orchestrator | 2025-09-19 17:06:50 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:06:50.335789 | orchestrator | 2025-09-19 17:06:50 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:50.335819 | orchestrator | 2025-09-19 17:06:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:53.374856 | orchestrator | 2025-09-19 17:06:53 | INFO  | Task 
773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:06:53.375012 | orchestrator | 2025-09-19 17:06:53 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:53.375026 | orchestrator | 2025-09-19 17:06:53 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:56.409345 | orchestrator | 2025-09-19 17:06:56 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:06:56.410584 | orchestrator | 2025-09-19 17:06:56 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:56.410597 | orchestrator | 2025-09-19 17:06:56 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:06:59.455001 | orchestrator | 2025-09-19 17:06:59 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:06:59.456586 | orchestrator | 2025-09-19 17:06:59 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:06:59.456623 | orchestrator | 2025-09-19 17:06:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:07:02.491535 | orchestrator | 2025-09-19 17:07:02 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:07:02.492629 | orchestrator | 2025-09-19 17:07:02 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:07:02.492658 | orchestrator | 2025-09-19 17:07:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:07:05.530729 | orchestrator | 2025-09-19 17:07:05 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:07:05.532002 | orchestrator | 2025-09-19 17:07:05 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:07:05.532179 | orchestrator | 2025-09-19 17:07:05 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:07:08.572783 | orchestrator | 2025-09-19 17:07:08 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 
17:07:08.574659 | orchestrator | 2025-09-19 17:07:08 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:07:08.574847 | orchestrator | 2025-09-19 17:07:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:07:11.617345 | orchestrator | 2025-09-19 17:07:11 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:07:11.619149 | orchestrator | 2025-09-19 17:07:11 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:07:11.619190 | orchestrator | 2025-09-19 17:07:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:07:14.657593 | orchestrator | 2025-09-19 17:07:14 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:07:14.659991 | orchestrator | 2025-09-19 17:07:14 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:07:14.660033 | orchestrator | 2025-09-19 17:07:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:07:17.701086 | orchestrator | 2025-09-19 17:07:17 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:07:17.701487 | orchestrator | 2025-09-19 17:07:17 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:07:17.701511 | orchestrator | 2025-09-19 17:07:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:07:20.735507 | orchestrator | 2025-09-19 17:07:20 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:07:20.736977 | orchestrator | 2025-09-19 17:07:20 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:07:20.737018 | orchestrator | 2025-09-19 17:07:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:07:23.779502 | orchestrator | 2025-09-19 17:07:23 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:07:23.780162 | orchestrator | 2025-09-19 17:07:23 | INFO  | Task 
3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state STARTED 2025-09-19 17:07:23.780204 | orchestrator | 2025-09-19 17:07:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:07:26.865639 | orchestrator | 2025-09-19 17:07:26.865734 | orchestrator | 2025-09-19 17:07:26.865749 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-09-19 17:07:26.865761 | orchestrator | 2025-09-19 17:07:26.865771 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-09-19 17:07:26.865782 | orchestrator | Friday 19 September 2025 17:06:10 +0000 (0:00:00.154) 0:00:00.154 ****** 2025-09-19 17:07:26.865792 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-09-19 17:07:26.865813 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 17:07:26.865823 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 17:07:26.865833 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 17:07:26.865843 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-09-19 17:07:26.865867 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-09-19 17:07:26.865908 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-09-19 17:07:26.865919 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-09-19 17:07:26.865939 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-09-19 17:07:26.865949 | orchestrator | 2025-09-19 17:07:26.865959 | orchestrator | TASK [Create 
share directory] ************************************************** 2025-09-19 17:07:26.865969 | orchestrator | Friday 19 September 2025 17:06:14 +0000 (0:00:04.741) 0:00:04.896 ****** 2025-09-19 17:07:26.865979 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-19 17:07:26.865989 | orchestrator | 2025-09-19 17:07:26.866008 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-09-19 17:07:26.866070 | orchestrator | Friday 19 September 2025 17:06:15 +0000 (0:00:01.010) 0:00:05.906 ****** 2025-09-19 17:07:26.866081 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-09-19 17:07:26.866092 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 17:07:26.866124 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 17:07:26.866135 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 17:07:26.866145 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-09-19 17:07:26.866154 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-09-19 17:07:26.866164 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-09-19 17:07:26.866174 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-09-19 17:07:26.866185 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-09-19 17:07:26.866196 | orchestrator | 2025-09-19 17:07:26.866208 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-09-19 17:07:26.866219 | orchestrator | Friday 19 September 2025 17:06:28 +0000 (0:00:12.348) 0:00:18.254 ****** 2025-09-19 17:07:26.866231 | orchestrator | changed: [testbed-manager] => 
(item=ceph.client.admin.keyring) 2025-09-19 17:07:26.866242 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 17:07:26.866253 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 17:07:26.866264 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 17:07:26.866276 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-09-19 17:07:26.866287 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-09-19 17:07:26.866298 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-09-19 17:07:26.866309 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-09-19 17:07:26.866320 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-09-19 17:07:26.866331 | orchestrator | 2025-09-19 17:07:26.866340 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:07:26.866350 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:07:26.866361 | orchestrator | 2025-09-19 17:07:26.866371 | orchestrator | 2025-09-19 17:07:26.866380 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:07:26.866401 | orchestrator | Friday 19 September 2025 17:06:34 +0000 (0:00:06.033) 0:00:24.288 ****** 2025-09-19 17:07:26.866411 | orchestrator | =============================================================================== 2025-09-19 17:07:26.866421 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.35s 2025-09-19 17:07:26.866443 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.03s 2025-09-19 17:07:26.866453 | orchestrator | Fetch all ceph keys 
----------------------------------------------------- 4.74s 2025-09-19 17:07:26.866463 | orchestrator | Create share directory -------------------------------------------------- 1.01s 2025-09-19 17:07:26.866472 | orchestrator | 2025-09-19 17:07:26.866481 | orchestrator | 2025-09-19 17:07:26.866492 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:07:26.866501 | orchestrator | 2025-09-19 17:07:26.866533 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:07:26.866543 | orchestrator | Friday 19 September 2025 17:04:46 +0000 (0:00:00.235) 0:00:00.235 ****** 2025-09-19 17:07:26.866553 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:07:26.866563 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:07:26.866573 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:07:26.866582 | orchestrator | 2025-09-19 17:07:26.866592 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:07:26.866601 | orchestrator | Friday 19 September 2025 17:04:47 +0000 (0:00:00.250) 0:00:00.485 ****** 2025-09-19 17:07:26.866611 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-19 17:07:26.866631 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-19 17:07:26.866641 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-19 17:07:26.866651 | orchestrator | 2025-09-19 17:07:26.866660 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-19 17:07:26.866670 | orchestrator | 2025-09-19 17:07:26.866680 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 17:07:26.866689 | orchestrator | Friday 19 September 2025 17:04:47 +0000 (0:00:00.341) 0:00:00.827 ****** 2025-09-19 17:07:26.866699 | orchestrator | included: 
/ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:07:26.866708 | orchestrator | 2025-09-19 17:07:26.866718 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-19 17:07:26.866727 | orchestrator | Friday 19 September 2025 17:04:48 +0000 (0:00:00.491) 0:00:01.318 ****** 2025-09-19 17:07:26.866741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.866755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.866782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.866803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 17:07:26.866815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 17:07:26.866825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 17:07:26.866836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.866847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.866857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.866867 | orchestrator | 2025-09-19 17:07:26.866877 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-09-19 17:07:26.866909 | orchestrator | Friday 19 September 2025 17:04:49 +0000 (0:00:01.698) 0:00:03.017 ****** 2025-09-19 17:07:26.866928 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-09-19 17:07:26.866938 | orchestrator | 2025-09-19 17:07:26.866948 | orchestrator | TASK [keystone : Set keystone policy 
file] ************************************* 2025-09-19 17:07:26.866963 | orchestrator | Friday 19 September 2025 17:04:50 +0000 (0:00:00.628) 0:00:03.645 ****** 2025-09-19 17:07:26.866973 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:07:26.866982 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:07:26.866992 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:07:26.867002 | orchestrator | 2025-09-19 17:07:26.867011 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-09-19 17:07:26.867021 | orchestrator | Friday 19 September 2025 17:04:50 +0000 (0:00:00.379) 0:00:04.025 ****** 2025-09-19 17:07:26.867031 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 17:07:26.867040 | orchestrator | 2025-09-19 17:07:26.867050 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 17:07:26.867059 | orchestrator | Friday 19 September 2025 17:04:51 +0000 (0:00:00.639) 0:00:04.664 ****** 2025-09-19 17:07:26.867069 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:07:26.867079 | orchestrator | 2025-09-19 17:07:26.867089 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-09-19 17:07:26.867098 | orchestrator | Friday 19 September 2025 17:04:51 +0000 (0:00:00.462) 0:00:05.126 ****** 2025-09-19 17:07:26.867109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.867121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.867136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.867162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 17:07:26.867173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 17:07:26.867184 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 17:07:26.867207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.867217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.867237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.867254 | orchestrator | 2025-09-19 17:07:26.867264 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-19 17:07:26.867274 | orchestrator | Friday 19 September 2025 17:04:54 +0000 (0:00:03.100) 0:00:08.226 ****** 2025-09-19 17:07:26.867296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 17:07:26.867308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:07:26.867318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 17:07:26.867328 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.867339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 17:07:26.867356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:07:26.867370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 17:07:26.867381 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:07:26.867398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 17:07:26.867409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:07:26.867419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 17:07:26.867429 | 
orchestrator | skipping: [testbed-node-2] 2025-09-19 17:07:26.867439 | orchestrator | 2025-09-19 17:07:26.867449 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-09-19 17:07:26.867459 | orchestrator | Friday 19 September 2025 17:04:55 +0000 (0:00:00.779) 0:00:09.006 ****** 2025-09-19 17:07:26.867469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 17:07:26.867495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2025-09-19 17:07:26.867511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 17:07:26.867522 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.867532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 17:07:26.867543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:07:26.867582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 17:07:26.867600 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:07:26.867615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-19 17:07:26.867634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:07:26.867644 | orchestrator | 2025-09-19 17:07:26 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state STARTED 2025-09-19 17:07:26.867655 | orchestrator | 2025-09-19 17:07:26 | INFO  | Task 3544ba8b-0f74-4509-9c2b-bc651d2950f9 is in state SUCCESS 2025-09-19 17:07:26.867667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-19 17:07:26.867677 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:07:26.867687 | orchestrator | 2025-09-19 17:07:26.867697 | orchestrator | TASK [keystone : Copying over config.json files for 
services] ****************** 2025-09-19 17:07:26.867707 | orchestrator | Friday 19 September 2025 17:04:56 +0000 (0:00:00.728) 0:00:09.734 ****** 2025-09-19 17:07:26.867717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.867735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.867757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.867769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 17:07:26.867779 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 17:07:26.867789 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 17:07:26.867805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.867815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': 
{'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.867830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.867840 | orchestrator | 2025-09-19 17:07:26.867855 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-09-19 17:07:26.867865 | orchestrator | Friday 19 September 2025 17:04:59 +0000 (0:00:03.342) 0:00:13.077 ****** 2025-09-19 17:07:26.867876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.867917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:07:26.867935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.867946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:07:26.867968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.867979 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:07:26.867990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.868006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.868016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.868026 | orchestrator | 2025-09-19 17:07:26.868035 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-09-19 17:07:26.868046 | orchestrator | Friday 19 September 2025 17:05:05 +0000 (0:00:05.346) 0:00:18.423 ****** 2025-09-19 17:07:26.868064 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:07:26.868082 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:07:26.868099 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:07:26.868123 | orchestrator | 2025-09-19 17:07:26.868143 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-09-19 17:07:26.868160 | orchestrator | Friday 19 September 2025 17:05:06 +0000 (0:00:01.448) 0:00:19.871 ****** 2025-09-19 17:07:26.868177 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.868194 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:07:26.868211 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:07:26.868228 | orchestrator | 2025-09-19 17:07:26.868246 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-09-19 17:07:26.868264 | orchestrator | Friday 19 September 2025 17:05:07 +0000 (0:00:00.517) 0:00:20.389 ****** 2025-09-19 17:07:26.868282 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.868299 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:07:26.868316 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 17:07:26.868326 | orchestrator | 2025-09-19 17:07:26.868336 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-09-19 17:07:26.868345 | orchestrator | Friday 19 September 2025 17:05:07 +0000 (0:00:00.296) 0:00:20.686 ****** 2025-09-19 17:07:26.868355 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.868364 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:07:26.868374 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:07:26.868383 | orchestrator | 2025-09-19 17:07:26.868399 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-09-19 17:07:26.868409 | orchestrator | Friday 19 September 2025 17:05:07 +0000 (0:00:00.464) 0:00:21.150 ****** 2025-09-19 17:07:26.868429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.868450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:07:26.868461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.868472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:07:26.868492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.868503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-19 17:07:26.868520 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.868531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.868541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.868550 | orchestrator | 2025-09-19 17:07:26.868560 | orchestrator | TASK [keystone : include_tasks] 
************************************************ 2025-09-19 17:07:26.868570 | orchestrator | Friday 19 September 2025 17:05:10 +0000 (0:00:02.306) 0:00:23.456 ****** 2025-09-19 17:07:26.868580 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.868590 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:07:26.868599 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:07:26.868609 | orchestrator | 2025-09-19 17:07:26.868618 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-09-19 17:07:26.868628 | orchestrator | Friday 19 September 2025 17:05:10 +0000 (0:00:00.303) 0:00:23.760 ****** 2025-09-19 17:07:26.868637 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 17:07:26.868647 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 17:07:26.868657 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-09-19 17:07:26.868667 | orchestrator | 2025-09-19 17:07:26.868676 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-09-19 17:07:26.868686 | orchestrator | Friday 19 September 2025 17:05:12 +0000 (0:00:01.971) 0:00:25.731 ****** 2025-09-19 17:07:26.868696 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 17:07:26.868706 | orchestrator | 2025-09-19 17:07:26.868715 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-09-19 17:07:26.868725 | orchestrator | Friday 19 September 2025 17:05:13 +0000 (0:00:00.937) 0:00:26.669 ****** 2025-09-19 17:07:26.868735 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.868744 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:07:26.868754 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:07:26.868763 | orchestrator | 2025-09-19 
17:07:26.868773 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-09-19 17:07:26.868796 | orchestrator | Friday 19 September 2025 17:05:14 +0000 (0:00:00.787) 0:00:27.456 ****** 2025-09-19 17:07:26.868806 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 17:07:26.868816 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 17:07:26.868826 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-19 17:07:26.868836 | orchestrator | 2025-09-19 17:07:26.868851 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-09-19 17:07:26.868861 | orchestrator | Friday 19 September 2025 17:05:15 +0000 (0:00:00.979) 0:00:28.436 ****** 2025-09-19 17:07:26.868871 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:07:26.868880 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:07:26.868941 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:07:26.868951 | orchestrator | 2025-09-19 17:07:26.868961 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-09-19 17:07:26.868971 | orchestrator | Friday 19 September 2025 17:05:15 +0000 (0:00:00.314) 0:00:28.751 ****** 2025-09-19 17:07:26.868981 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 17:07:26.868990 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 17:07:26.869000 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-09-19 17:07:26.869010 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 17:07:26.869020 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-09-19 17:07:26.869029 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 
'dest': 'fernet-rotate.sh'}) 2025-09-19 17:07:26.869039 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 17:07:26.869049 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 17:07:26.869059 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-09-19 17:07:26.869068 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 17:07:26.869078 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 17:07:26.869088 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-09-19 17:07:26.869097 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 17:07:26.869107 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 17:07:26.869117 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-09-19 17:07:26.869126 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 17:07:26.869136 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 17:07:26.869146 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-19 17:07:26.869156 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 17:07:26.869166 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 17:07:26.869175 | orchestrator | changed: [testbed-node-1] => 
(item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-19 17:07:26.869185 | orchestrator | 2025-09-19 17:07:26.869195 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-09-19 17:07:26.869205 | orchestrator | Friday 19 September 2025 17:05:24 +0000 (0:00:09.130) 0:00:37.881 ****** 2025-09-19 17:07:26.869214 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 17:07:26.869230 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 17:07:26.869240 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-19 17:07:26.869250 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 17:07:26.869259 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 17:07:26.869269 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-19 17:07:26.869278 | orchestrator | 2025-09-19 17:07:26.869288 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-09-19 17:07:26.869298 | orchestrator | Friday 19 September 2025 17:05:27 +0000 (0:00:02.839) 0:00:40.721 ****** 2025-09-19 17:07:26.869320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.869332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.869343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-19 17:07:26.869354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 17:07:26.869372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 17:07:26.869392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-19 17:07:26.869403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.869413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.869423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-19 17:07:26.869433 | orchestrator | 2025-09-19 17:07:26.869443 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 17:07:26.869453 | orchestrator | Friday 19 September 2025 17:05:29 +0000 (0:00:02.398) 0:00:43.119 ****** 2025-09-19 17:07:26.869463 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.869479 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:07:26.869551 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:07:26.869564 | orchestrator | 2025-09-19 17:07:26.869574 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-19 17:07:26.869583 | orchestrator | Friday 19 September 2025 17:05:30 +0000 (0:00:00.288) 0:00:43.407 ****** 2025-09-19 17:07:26.869593 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:07:26.869602 | orchestrator | 2025-09-19 17:07:26.869612 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-19 17:07:26.869622 | orchestrator | Friday 19 September 2025 17:05:32 +0000 (0:00:02.396) 0:00:45.803 ****** 2025-09-19 17:07:26.869631 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:07:26.869640 | orchestrator | 2025-09-19 17:07:26.869650 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-19 17:07:26.869660 | orchestrator | Friday 19 September 2025 17:05:34 +0000 (0:00:02.286) 0:00:48.090 ****** 2025-09-19 17:07:26.869669 | orchestrator | ok: 
[testbed-node-1] 2025-09-19 17:07:26.869679 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:07:26.869688 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:07:26.869698 | orchestrator | 2025-09-19 17:07:26.869707 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-19 17:07:26.869717 | orchestrator | Friday 19 September 2025 17:05:35 +0000 (0:00:00.800) 0:00:48.891 ****** 2025-09-19 17:07:26.869726 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:07:26.869736 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:07:26.869745 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:07:26.869755 | orchestrator | 2025-09-19 17:07:26.869764 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-19 17:07:26.869774 | orchestrator | Friday 19 September 2025 17:05:36 +0000 (0:00:00.579) 0:00:49.471 ****** 2025-09-19 17:07:26.869783 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.869793 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:07:26.869802 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:07:26.869812 | orchestrator | 2025-09-19 17:07:26.869821 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-19 17:07:26.869831 | orchestrator | Friday 19 September 2025 17:05:36 +0000 (0:00:00.355) 0:00:49.826 ****** 2025-09-19 17:07:26.869841 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:07:26.869850 | orchestrator | 2025-09-19 17:07:26.869860 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-19 17:07:26.869869 | orchestrator | Friday 19 September 2025 17:05:51 +0000 (0:00:14.529) 0:01:04.356 ****** 2025-09-19 17:07:26.869878 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:07:26.869906 | orchestrator | 2025-09-19 17:07:26.869917 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2025-09-19 17:07:26.869931 | orchestrator | Friday 19 September 2025 17:06:01 +0000 (0:00:10.807) 0:01:15.163 ****** 2025-09-19 17:07:26.869941 | orchestrator | 2025-09-19 17:07:26.869951 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-19 17:07:26.869961 | orchestrator | Friday 19 September 2025 17:06:01 +0000 (0:00:00.058) 0:01:15.222 ****** 2025-09-19 17:07:26.869971 | orchestrator | 2025-09-19 17:07:26.869980 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-19 17:07:26.869990 | orchestrator | Friday 19 September 2025 17:06:01 +0000 (0:00:00.057) 0:01:15.279 ****** 2025-09-19 17:07:26.869999 | orchestrator | 2025-09-19 17:07:26.870009 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-19 17:07:26.870063 | orchestrator | Friday 19 September 2025 17:06:02 +0000 (0:00:00.061) 0:01:15.341 ****** 2025-09-19 17:07:26.870073 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:07:26.870083 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:07:26.870092 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:07:26.870102 | orchestrator | 2025-09-19 17:07:26.870111 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-19 17:07:26.870121 | orchestrator | Friday 19 September 2025 17:06:21 +0000 (0:00:19.880) 0:01:35.221 ****** 2025-09-19 17:07:26.870138 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:07:26.870147 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:07:26.870157 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:07:26.870166 | orchestrator | 2025-09-19 17:07:26.870176 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-19 17:07:26.870186 | orchestrator | Friday 19 September 2025 17:06:26 +0000 (0:00:04.684) 
0:01:39.906 ****** 2025-09-19 17:07:26.870195 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:07:26.870205 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:07:26.870214 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:07:26.870224 | orchestrator | 2025-09-19 17:07:26.870233 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 17:07:26.870243 | orchestrator | Friday 19 September 2025 17:06:37 +0000 (0:00:10.946) 0:01:50.853 ****** 2025-09-19 17:07:26.870252 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:07:26.870262 | orchestrator | 2025-09-19 17:07:26.870272 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-19 17:07:26.870281 | orchestrator | Friday 19 September 2025 17:06:38 +0000 (0:00:00.634) 0:01:51.487 ****** 2025-09-19 17:07:26.870291 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:07:26.870301 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:07:26.870310 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:07:26.870320 | orchestrator | 2025-09-19 17:07:26.870329 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-19 17:07:26.870339 | orchestrator | Friday 19 September 2025 17:06:38 +0000 (0:00:00.706) 0:01:52.194 ****** 2025-09-19 17:07:26.870349 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:07:26.870358 | orchestrator | 2025-09-19 17:07:26.870368 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-19 17:07:26.870378 | orchestrator | Friday 19 September 2025 17:06:40 +0000 (0:00:01.892) 0:01:54.086 ****** 2025-09-19 17:07:26.870387 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-19 17:07:26.870397 | orchestrator | 2025-09-19 17:07:26.870407 | orchestrator | TASK [service-ks-register 
: keystone | Creating services] ********************** 2025-09-19 17:07:26.870420 | orchestrator | Friday 19 September 2025 17:06:51 +0000 (0:00:10.628) 0:02:04.714 ****** 2025-09-19 17:07:26.870444 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-19 17:07:26.870460 | orchestrator | 2025-09-19 17:07:26.870475 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-19 17:07:26.870490 | orchestrator | Friday 19 September 2025 17:07:14 +0000 (0:00:22.745) 0:02:27.460 ****** 2025-09-19 17:07:26.870506 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-19 17:07:26.870522 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-19 17:07:26.870538 | orchestrator | 2025-09-19 17:07:26.870554 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-19 17:07:26.870570 | orchestrator | Friday 19 September 2025 17:07:21 +0000 (0:00:07.064) 0:02:34.525 ****** 2025-09-19 17:07:26.870585 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.870602 | orchestrator | 2025-09-19 17:07:26.870617 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-19 17:07:26.870632 | orchestrator | Friday 19 September 2025 17:07:21 +0000 (0:00:00.122) 0:02:34.648 ****** 2025-09-19 17:07:26.870646 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.870662 | orchestrator | 2025-09-19 17:07:26.870678 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-19 17:07:26.870695 | orchestrator | Friday 19 September 2025 17:07:21 +0000 (0:00:00.107) 0:02:34.755 ****** 2025-09-19 17:07:26.870711 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.870727 | orchestrator | 2025-09-19 17:07:26.870745 | orchestrator | TASK 
[service-ks-register : keystone | Granting user roles] ******************** 2025-09-19 17:07:26.870773 | orchestrator | Friday 19 September 2025 17:07:21 +0000 (0:00:00.127) 0:02:34.882 ****** 2025-09-19 17:07:26.870789 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.870805 | orchestrator | 2025-09-19 17:07:26.870821 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-19 17:07:26.870838 | orchestrator | Friday 19 September 2025 17:07:21 +0000 (0:00:00.392) 0:02:35.275 ****** 2025-09-19 17:07:26.870855 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:07:26.870872 | orchestrator | 2025-09-19 17:07:26.870960 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-19 17:07:26.870980 | orchestrator | Friday 19 September 2025 17:07:25 +0000 (0:00:03.391) 0:02:38.666 ****** 2025-09-19 17:07:26.870996 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:07:26.871010 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:07:26.871026 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:07:26.871042 | orchestrator | 2025-09-19 17:07:26.871068 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:07:26.871087 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-19 17:07:26.871104 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-19 17:07:26.871120 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-19 17:07:26.871134 | orchestrator | 2025-09-19 17:07:26.871149 | orchestrator | 2025-09-19 17:07:26.871164 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:07:26.871179 | orchestrator | Friday 19 September 2025 17:07:25 +0000 (0:00:00.408) 
0:02:39.074 ****** 2025-09-19 17:07:26.871194 | orchestrator | =============================================================================== 2025-09-19 17:07:26.871208 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.75s 2025-09-19 17:07:26.871223 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.88s 2025-09-19 17:07:26.871239 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.53s 2025-09-19 17:07:26.871255 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.95s 2025-09-19 17:07:26.871271 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.81s 2025-09-19 17:07:26.871287 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.63s 2025-09-19 17:07:26.871304 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.13s 2025-09-19 17:07:26.871321 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.06s 2025-09-19 17:07:26.871337 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.35s 2025-09-19 17:07:26.871354 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 4.68s 2025-09-19 17:07:26.871371 | orchestrator | keystone : Creating default user role ----------------------------------- 3.39s 2025-09-19 17:07:26.871388 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.34s 2025-09-19 17:07:26.871405 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.10s 2025-09-19 17:07:26.871422 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.84s 2025-09-19 17:07:26.871439 | orchestrator | keystone : Check keystone containers ------------------------------------ 
2.40s 2025-09-19 17:07:26.871454 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.40s 2025-09-19 17:07:26.871470 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.31s 2025-09-19 17:07:26.871485 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.29s 2025-09-19 17:07:26.871498 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.97s 2025-09-19 17:07:26.871522 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.89s 2025-09-19 17:07:26.871550 | orchestrator | 2025-09-19 17:07:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:07:29.845114 | orchestrator | 2025-09-19 17:07:29 | INFO  | Task c6dd5989-b935-4461-83a9-4f69bd7660d4 is in state STARTED 2025-09-19 17:07:29.845489 | orchestrator | 2025-09-19 17:07:29 | INFO  | Task a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:07:29.846546 | orchestrator | 2025-09-19 17:07:29 | INFO  | Task 773f9449-6b23-4761-b726-14ba16c6dd70 is in state SUCCESS 2025-09-19 17:07:29.848938 | orchestrator | 2025-09-19 17:07:29 | INFO  | Task 5c1bbade-42f1-4c82-b506-83fef4578f40 is in state STARTED 2025-09-19 17:07:29.849595 | orchestrator | 2025-09-19 17:07:29 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:07:29.850209 | orchestrator | 2025-09-19 17:07:29 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:07:29.850412 | orchestrator | 2025-09-19 17:07:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:07:32.876015 | orchestrator | 2025-09-19 17:07:32 | INFO  | Task c6dd5989-b935-4461-83a9-4f69bd7660d4 is in state STARTED 2025-09-19 17:07:32.877653 | orchestrator | 2025-09-19 17:07:32 | INFO  | Task a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:07:32.880439 | orchestrator | 2025-09-19 
17:07:32 | INFO  | Task 5c1bbade-42f1-4c82-b506-83fef4578f40 is in state STARTED 2025-09-19 17:07:32.883067 | orchestrator | 2025-09-19 17:07:32 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:07:32.886093 | orchestrator | 2025-09-19 17:07:32 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:07:32.886514 | orchestrator | 2025-09-19 17:07:32 | INFO  | Wait 1 second(s) until the next check
[... identical polling output repeated every ~3 seconds from 17:07:35 to 17:08:55; tasks c6dd5989-b935-4461-83a9-4f69bd7660d4, a22c2029-82ff-476d-a2b4-64d81caaa354, 5c1bbade-42f1-4c82-b506-83fef4578f40, 3fbfefbe-e6c6-4843-b773-2833876a8e5f and 2b4f1952-9a68-4723-85d9-90ffac95204a remained in state STARTED throughout ...]
2025-09-19 17:08:58.111714 | orchestrator | 2025-09-19 17:08:58.111828 | orchestrator | 2025-09-19 17:08:58.111845 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-09-19 17:08:58.111858 | orchestrator | 2025-09-19 17:08:58.111870 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-09-19 17:08:58.111882 | orchestrator | Friday 19 September 2025 17:06:38 +0000 (0:00:00.214) 0:00:00.214 ****** 2025-09-19 17:08:58.111893 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-09-19 17:08:58.111958 | orchestrator | 2025-09-19 17:08:58.111970 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-09-19 17:08:58.111981 | orchestrator | Friday 19 September 2025 17:06:38 +0000 (0:00:00.201) 0:00:00.416 ****** 2025-09-19 17:08:58.111993 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-09-19 17:08:58.112004 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-09-19 17:08:58.112015 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-09-19 17:08:58.112027 | orchestrator | 2025-09-19 17:08:58.112037 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-09-19 17:08:58.112048 | orchestrator | Friday 19 September 2025 17:06:39 +0000 (0:00:01.087) 0:00:01.504 ****** 2025-09-19 17:08:58.112060 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-09-19 17:08:58.112071 | orchestrator | 2025-09-19 17:08:58.112082 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-09-19 17:08:58.112093 | orchestrator | Friday 19 September 2025 17:06:40 +0000
(0:00:01.049) 0:00:02.553 ****** 2025-09-19 17:08:58.112104 | orchestrator | changed: [testbed-manager] 2025-09-19 17:08:58.112115 | orchestrator | 2025-09-19 17:08:58.112126 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-09-19 17:08:58.112137 | orchestrator | Friday 19 September 2025 17:06:41 +0000 (0:00:00.862) 0:00:03.415 ****** 2025-09-19 17:08:58.112148 | orchestrator | changed: [testbed-manager] 2025-09-19 17:08:58.112159 | orchestrator | 2025-09-19 17:08:58.112170 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-09-19 17:08:58.112181 | orchestrator | Friday 19 September 2025 17:06:42 +0000 (0:00:00.772) 0:00:04.188 ****** 2025-09-19 17:08:58.112192 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-09-19 17:08:58.112203 | orchestrator | ok: [testbed-manager] 2025-09-19 17:08:58.112215 | orchestrator | 2025-09-19 17:08:58.112244 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-09-19 17:08:58.112256 | orchestrator | Friday 19 September 2025 17:07:18 +0000 (0:00:36.537) 0:00:40.726 ****** 2025-09-19 17:08:58.112267 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-09-19 17:08:58.112279 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-09-19 17:08:58.112313 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-09-19 17:08:58.112324 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-09-19 17:08:58.112335 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-09-19 17:08:58.112346 | orchestrator | 2025-09-19 17:08:58.112356 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-09-19 17:08:58.112367 | orchestrator | Friday 19 September 2025 17:07:22 +0000 (0:00:03.656) 0:00:44.382 ****** 2025-09-19 17:08:58.112378 | 
orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-09-19 17:08:58.112389 | orchestrator | 2025-09-19 17:08:58.112400 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-09-19 17:08:58.112411 | orchestrator | Friday 19 September 2025 17:07:22 +0000 (0:00:00.429) 0:00:44.812 ****** 2025-09-19 17:08:58.112422 | orchestrator | skipping: [testbed-manager] 2025-09-19 17:08:58.112433 | orchestrator | 2025-09-19 17:08:58.112444 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-09-19 17:08:58.112455 | orchestrator | Friday 19 September 2025 17:07:22 +0000 (0:00:00.141) 0:00:44.953 ****** 2025-09-19 17:08:58.112465 | orchestrator | skipping: [testbed-manager] 2025-09-19 17:08:58.112476 | orchestrator | 2025-09-19 17:08:58.112487 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-09-19 17:08:58.112498 | orchestrator | Friday 19 September 2025 17:07:23 +0000 (0:00:00.331) 0:00:45.285 ****** 2025-09-19 17:08:58.112508 | orchestrator | changed: [testbed-manager] 2025-09-19 17:08:58.112519 | orchestrator | 2025-09-19 17:08:58.112530 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-09-19 17:08:58.112541 | orchestrator | Friday 19 September 2025 17:07:24 +0000 (0:00:01.694) 0:00:46.980 ****** 2025-09-19 17:08:58.112552 | orchestrator | changed: [testbed-manager] 2025-09-19 17:08:58.112563 | orchestrator | 2025-09-19 17:08:58.112574 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for a healthy service] ******* 2025-09-19 17:08:58.112584 | orchestrator | Friday 19 September 2025 17:07:25 +0000 (0:00:00.574) 0:00:47.643 ****** 2025-09-19 17:08:58.112595 | orchestrator | changed: [testbed-manager] 2025-09-19 17:08:58.112606 | orchestrator | 2025-09-19 17:08:58.112617 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash
completion scripts] ***** 2025-09-19 17:08:58.112628 | orchestrator | Friday 19 September 2025 17:07:26 +0000 (0:00:00.574) 0:00:48.218 ****** 2025-09-19 17:08:58.112638 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-09-19 17:08:58.112649 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-09-19 17:08:58.112660 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-09-19 17:08:58.112671 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-09-19 17:08:58.112681 | orchestrator | 2025-09-19 17:08:58.112692 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:08:58.112704 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 17:08:58.112716 | orchestrator | 2025-09-19 17:08:58.112726 | orchestrator | 2025-09-19 17:08:58.112755 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:08:58.112766 | orchestrator | Friday 19 September 2025 17:07:28 +0000 (0:00:02.029) 0:00:50.248 ****** 2025-09-19 17:08:58.112777 | orchestrator | =============================================================================== 2025-09-19 17:08:58.112788 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.54s 2025-09-19 17:08:58.112799 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.66s 2025-09-19 17:08:58.112810 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 2.03s 2025-09-19 17:08:58.112820 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.69s 2025-09-19 17:08:58.112831 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.09s 2025-09-19 17:08:58.112842 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.05s 2025-09-19 
17:08:58.112861 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.86s 2025-09-19 17:08:58.112872 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.77s 2025-09-19 17:08:58.112883 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.66s 2025-09-19 17:08:58.112893 | orchestrator | osism.services.cephclient : Wait for a healthy service ------------------ 0.57s 2025-09-19 17:08:58.112932 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.43s 2025-09-19 17:08:58.112943 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.33s 2025-09-19 17:08:58.112954 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s 2025-09-19 17:08:58.112965 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2025-09-19 17:08:58.112975 | orchestrator | 2025-09-19 17:08:58.112986 | orchestrator | 2025-09-19 17:08:58.112997 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-19 17:08:58.113008 | orchestrator | 2025-09-19 17:08:58.113019 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-19 17:08:58.113030 | orchestrator | Friday 19 September 2025 17:07:30 +0000 (0:00:00.167) 0:00:00.167 ****** 2025-09-19 17:08:58.113041 | orchestrator | changed: [localhost] 2025-09-19 17:08:58.113051 | orchestrator | 2025-09-19 17:08:58.113062 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-19 17:08:58.113073 | orchestrator | Friday 19 September 2025 17:07:31 +0000 (0:00:00.952) 0:00:01.120 ****** 2025-09-19 17:08:58.113089 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left).
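The `FAILED - RETRYING` records above come from the download task's retry loop: Ansible re-runs a failed task a fixed number of times, pausing between attempts, before reporting a final failure. A minimal sketch of that bounded-retry pattern, with a hypothetical `fetch` callable and retry/delay values that are illustrative, not taken from the playbook:

```python
import time

def download_with_retries(fetch, retries=3, delay=5):
    """Invoke fetch() up to `retries` times, sleeping `delay` seconds
    between attempts; re-raise the last error if every attempt fails."""
    last_err = None
    for attempt in range(1, retries + 1):
        try:
            return fetch()
        except OSError as err:  # e.g. a transient network/HTTP failure
            last_err = err
            if attempt < retries:
                time.sleep(delay)
    raise last_err
```

In the log above the initramfs download needed all three attempts (two `FAILED - RETRYING` records, then `changed`), which is why that single task dominates the play's recap at 78.84s.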
2025-09-19 17:08:58.113101 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left). 2025-09-19 17:08:58.113111 | orchestrator | changed: [localhost] 2025-09-19 17:08:58.113122 | orchestrator | 2025-09-19 17:08:58.113133 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-09-19 17:08:58.113144 | orchestrator | Friday 19 September 2025 17:08:50 +0000 (0:01:18.843) 0:01:19.964 ****** 2025-09-19 17:08:58.113155 | orchestrator | changed: [localhost] 2025-09-19 17:08:58.113166 | orchestrator | 2025-09-19 17:08:58.113177 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:08:58.113187 | orchestrator | 2025-09-19 17:08:58.113198 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:08:58.113209 | orchestrator | Friday 19 September 2025 17:08:56 +0000 (0:00:05.908) 0:01:25.872 ****** 2025-09-19 17:08:58.113220 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:08:58.113231 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:08:58.113242 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:08:58.113252 | orchestrator | 2025-09-19 17:08:58.113263 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:08:58.113274 | orchestrator | Friday 19 September 2025 17:08:56 +0000 (0:00:00.322) 0:01:26.195 ****** 2025-09-19 17:08:58.113285 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-09-19 17:08:58.113296 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-09-19 17:08:58.113307 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-09-19 17:08:58.113317 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-09-19 17:08:58.113328 | orchestrator | 2025-09-19 17:08:58.113339 | orchestrator | PLAY [Apply role ironic] 
******************************************************* 2025-09-19 17:08:58.113350 | orchestrator | skipping: no hosts matched 2025-09-19 17:08:58.113361 | orchestrator | 2025-09-19 17:08:58.113372 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:08:58.113383 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:08:58.113394 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:08:58.113414 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:08:58.113425 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:08:58.113436 | orchestrator | 2025-09-19 17:08:58.113447 | orchestrator | 2025-09-19 17:08:58.113458 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:08:58.113469 | orchestrator | Friday 19 September 2025 17:08:57 +0000 (0:00:00.405) 0:01:26.601 ****** 2025-09-19 17:08:58.113479 | orchestrator | =============================================================================== 2025-09-19 17:08:58.113490 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 78.84s 2025-09-19 17:08:58.113507 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.91s 2025-09-19 17:08:58.113518 | orchestrator | Ensure the destination directory exists --------------------------------- 0.95s 2025-09-19 17:08:58.113529 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2025-09-19 17:08:58.113539 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-09-19 17:08:58.113550 | orchestrator | 2025-09-19 17:08:58 | INFO  | Task 
c6dd5989-b935-4461-83a9-4f69bd7660d4 is in state SUCCESS 2025-09-19 17:08:58.115454 | orchestrator | 2025-09-19 17:08:58 | INFO  | Task a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:08:58.115488 | orchestrator | 2025-09-19 17:08:58 | INFO  | Task 5c1bbade-42f1-4c82-b506-83fef4578f40 is in state STARTED 2025-09-19 17:08:58.115505 | orchestrator | 2025-09-19 17:08:58 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:08:58.115522 | orchestrator | 2025-09-19 17:08:58 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:08:58.115539 | orchestrator | 2025-09-19 17:08:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:01.149630 | orchestrator | 2025-09-19 17:09:01 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:01.150157 | orchestrator | 2025-09-19 17:09:01 | INFO  | Task a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:09:01.150891 | orchestrator | 2025-09-19 17:09:01 | INFO  | Task 5c1bbade-42f1-4c82-b506-83fef4578f40 is in state STARTED 2025-09-19 17:09:01.151711 | orchestrator | 2025-09-19 17:09:01 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:01.152485 | orchestrator | 2025-09-19 17:09:01 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:01.152506 | orchestrator | 2025-09-19 17:09:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:04.184688 | orchestrator | 2025-09-19 17:09:04 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:04.185105 | orchestrator | 2025-09-19 17:09:04 | INFO  | Task a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:09:04.185597 | orchestrator | 2025-09-19 17:09:04 | INFO  | Task 5c1bbade-42f1-4c82-b506-83fef4578f40 is in state STARTED 2025-09-19 17:09:04.186216 | orchestrator | 2025-09-19 17:09:04 | INFO  | Task 
3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:04.187212 | orchestrator | 2025-09-19 17:09:04 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:04.187244 | orchestrator | 2025-09-19 17:09:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:07.210761 | orchestrator | 2025-09-19 17:09:07 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:07.210884 | orchestrator | 2025-09-19 17:09:07 | INFO  | Task a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:09:07.213605 | orchestrator | 2025-09-19 17:09:07 | INFO  | Task 5c1bbade-42f1-4c82-b506-83fef4578f40 is in state SUCCESS 2025-09-19 17:09:07.213935 | orchestrator | 2025-09-19 17:09:07 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:07.214670 | orchestrator | 2025-09-19 17:09:07 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:07.214699 | orchestrator | 2025-09-19 17:09:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:10.244968 | orchestrator | 2025-09-19 17:09:10 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:10.245695 | orchestrator | 2025-09-19 17:09:10 | INFO  | Task a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:09:10.246588 | orchestrator | 2025-09-19 17:09:10 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:10.247817 | orchestrator | 2025-09-19 17:09:10 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:10.247861 | orchestrator | 2025-09-19 17:09:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:13.276535 | orchestrator | 2025-09-19 17:09:13 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:13.276706 | orchestrator | 2025-09-19 17:09:13 | INFO  | Task 
a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:09:13.277379 | orchestrator | 2025-09-19 17:09:13 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:13.277838 | orchestrator | 2025-09-19 17:09:13 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:13.277867 | orchestrator | 2025-09-19 17:09:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:16.304435 | orchestrator | 2025-09-19 17:09:16 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:16.304555 | orchestrator | 2025-09-19 17:09:16 | INFO  | Task a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:09:16.304981 | orchestrator | 2025-09-19 17:09:16 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:16.305502 | orchestrator | 2025-09-19 17:09:16 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:16.305537 | orchestrator | 2025-09-19 17:09:16 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:19.334350 | orchestrator | 2025-09-19 17:09:19 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:19.335137 | orchestrator | 2025-09-19 17:09:19 | INFO  | Task a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:09:19.336241 | orchestrator | 2025-09-19 17:09:19 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:19.336831 | orchestrator | 2025-09-19 17:09:19 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:19.336981 | orchestrator | 2025-09-19 17:09:19 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:22.366859 | orchestrator | 2025-09-19 17:09:22 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:22.367003 | orchestrator | 2025-09-19 17:09:22 | INFO  | Task 
a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:09:22.367019 | orchestrator | 2025-09-19 17:09:22 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:22.367059 | orchestrator | 2025-09-19 17:09:22 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:22.367085 | orchestrator | 2025-09-19 17:09:22 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:25.392073 | orchestrator | 2025-09-19 17:09:25 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:25.393982 | orchestrator | 2025-09-19 17:09:25 | INFO  | Task a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:09:25.395629 | orchestrator | 2025-09-19 17:09:25 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:25.397140 | orchestrator | 2025-09-19 17:09:25 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:25.397242 | orchestrator | 2025-09-19 17:09:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:28.425966 | orchestrator | 2025-09-19 17:09:28 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:28.426783 | orchestrator | 2025-09-19 17:09:28 | INFO  | Task a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:09:28.427643 | orchestrator | 2025-09-19 17:09:28 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:28.428701 | orchestrator | 2025-09-19 17:09:28 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:28.428719 | orchestrator | 2025-09-19 17:09:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:31.458989 | orchestrator | 2025-09-19 17:09:31 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:31.460517 | orchestrator | 2025-09-19 17:09:31 | INFO  | Task 
a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:09:31.462374 | orchestrator | 2025-09-19 17:09:31 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:31.462817 | orchestrator | 2025-09-19 17:09:31 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:31.462843 | orchestrator | 2025-09-19 17:09:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:34.497480 | orchestrator | 2025-09-19 17:09:34 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:34.497586 | orchestrator | 2025-09-19 17:09:34 | INFO  | Task a22c2029-82ff-476d-a2b4-64d81caaa354 is in state STARTED 2025-09-19 17:09:34.497601 | orchestrator | 2025-09-19 17:09:34 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:34.497613 | orchestrator | 2025-09-19 17:09:34 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:34.497625 | orchestrator | 2025-09-19 17:09:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:37.517720 | orchestrator | 2025-09-19 17:09:37 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:37.518633 | orchestrator | 2025-09-19 17:09:37.518686 | orchestrator | 2025-09-19 17:09:37.518700 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2025-09-19 17:09:37.518712 | orchestrator | 2025-09-19 17:09:37.518724 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-09-19 17:09:37.518736 | orchestrator | Friday 19 September 2025 17:07:32 +0000 (0:00:00.254) 0:00:00.254 ****** 2025-09-19 17:09:37.518748 | orchestrator | changed: [testbed-manager] 2025-09-19 17:09:37.518760 | orchestrator | 2025-09-19 17:09:37.518771 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-09-19 17:09:37.518783
| orchestrator | Friday 19 September 2025 17:07:33 +0000 (0:00:01.264) 0:00:01.518 ****** 2025-09-19 17:09:37.518794 | orchestrator | changed: [testbed-manager] 2025-09-19 17:09:37.518832 | orchestrator | 2025-09-19 17:09:37.518844 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-09-19 17:09:37.518855 | orchestrator | Friday 19 September 2025 17:07:34 +0000 (0:00:00.903) 0:00:02.421 ****** 2025-09-19 17:09:37.518866 | orchestrator | changed: [testbed-manager] 2025-09-19 17:09:37.518876 | orchestrator | 2025-09-19 17:09:37.518887 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-09-19 17:09:37.518898 | orchestrator | Friday 19 September 2025 17:07:35 +0000 (0:00:00.889) 0:00:03.311 ****** 2025-09-19 17:09:37.519001 | orchestrator | changed: [testbed-manager] 2025-09-19 17:09:37.519025 | orchestrator | 2025-09-19 17:09:37.519042 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-09-19 17:09:37.519060 | orchestrator | Friday 19 September 2025 17:07:36 +0000 (0:00:01.021) 0:00:04.333 ****** 2025-09-19 17:09:37.519086 | orchestrator | changed: [testbed-manager] 2025-09-19 17:09:37.519268 | orchestrator | 2025-09-19 17:09:37.519283 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-09-19 17:09:37.519294 | orchestrator | Friday 19 September 2025 17:07:37 +0000 (0:00:00.922) 0:00:05.255 ****** 2025-09-19 17:09:37.519305 | orchestrator | changed: [testbed-manager] 2025-09-19 17:09:37.519319 | orchestrator | 2025-09-19 17:09:37.519338 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-09-19 17:09:37.519356 | orchestrator | Friday 19 September 2025 17:07:37 +0000 (0:00:00.930) 0:00:06.186 ****** 2025-09-19 17:09:37.519374 | orchestrator | changed: [testbed-manager] 2025-09-19 17:09:37.519398 | orchestrator | 
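The dashboard bootstrap above is a fixed sequence: disable the mgr dashboard module, write five `mgr/dashboard/*` settings, then re-enable the module. A hedged sketch of the corresponding `ceph` CLI calls as plain command strings (assuming the stock `ceph mgr module` / `ceph config set mgr` interface; the actual tasks may invoke the client differently, e.g. through the containerized wrapper scripts installed earlier):

```python
# Settings taken from the task names above; they are applied to the mgr
# daemon while the dashboard module is disabled, then the module is
# re-enabled so it starts with the new port/address configuration.
DASHBOARD_SETTINGS = {
    "mgr/dashboard/ssl": "false",
    "mgr/dashboard/server_port": "7000",
    "mgr/dashboard/server_addr": "0.0.0.0",
    "mgr/dashboard/standby_behaviour": "error",
    "mgr/dashboard/standby_error_status_code": "404",
}

def dashboard_bootstrap_commands(settings=DASHBOARD_SETTINGS):
    """Return the ordered command list: disable, configure, enable."""
    cmds = ["ceph mgr module disable dashboard"]
    cmds += [f"ceph config set mgr {key} {value}"
             for key, value in settings.items()]
    cmds.append("ceph mgr module enable dashboard")
    return cmds
```

Disabling the module before changing `server_port`/`server_addr` ensures the values take effect when the module starts, which is also why the subsequent plays restart the ceph manager service on each node in turn.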
2025-09-19 17:09:37.519429 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-09-19 17:09:37.519453 | orchestrator | Friday 19 September 2025 17:07:39 +0000 (0:00:01.157) 0:00:07.344 ****** 2025-09-19 17:09:37.519490 | orchestrator | changed: [testbed-manager] 2025-09-19 17:09:37.519567 | orchestrator | 2025-09-19 17:09:37.519580 | orchestrator | TASK [Create admin user] ******************************************************* 2025-09-19 17:09:37.519591 | orchestrator | Friday 19 September 2025 17:07:40 +0000 (0:00:01.064) 0:00:08.408 ****** 2025-09-19 17:09:37.519602 | orchestrator | changed: [testbed-manager] 2025-09-19 17:09:37.519613 | orchestrator | 2025-09-19 17:09:37.519624 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-09-19 17:09:37.519635 | orchestrator | Friday 19 September 2025 17:08:41 +0000 (0:01:01.561) 0:01:09.969 ****** 2025-09-19 17:09:37.519645 | orchestrator | skipping: [testbed-manager] 2025-09-19 17:09:37.519656 | orchestrator | 2025-09-19 17:09:37.519698 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 17:09:37.519711 | orchestrator | 2025-09-19 17:09:37.519768 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 17:09:37.519781 | orchestrator | Friday 19 September 2025 17:08:41 +0000 (0:00:00.198) 0:01:10.167 ****** 2025-09-19 17:09:37.519792 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:09:37.519803 | orchestrator | 2025-09-19 17:09:37.519814 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 17:09:37.519829 | orchestrator | 2025-09-19 17:09:37.519849 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 17:09:37.519866 | orchestrator | Friday 19 September 2025 17:08:53 +0000 (0:00:11.834) 
0:01:22.002 ****** 2025-09-19 17:09:37.519885 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:09:37.519925 | orchestrator | 2025-09-19 17:09:37.519946 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-09-19 17:09:37.519973 | orchestrator | 2025-09-19 17:09:37.519993 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-09-19 17:09:37.520012 | orchestrator | Friday 19 September 2025 17:09:05 +0000 (0:00:11.304) 0:01:33.306 ****** 2025-09-19 17:09:37.520037 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:09:37.520059 | orchestrator | 2025-09-19 17:09:37.520077 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:09:37.520095 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-19 17:09:37.520139 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:09:37.520162 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:09:37.520181 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:09:37.520199 | orchestrator | 2025-09-19 17:09:37.520216 | orchestrator | 2025-09-19 17:09:37.520234 | orchestrator | 2025-09-19 17:09:37.520254 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:09:37.520272 | orchestrator | Friday 19 September 2025 17:09:06 +0000 (0:00:01.211) 0:01:34.517 ****** 2025-09-19 17:09:37.520292 | orchestrator | =============================================================================== 2025-09-19 17:09:37.520311 | orchestrator | Create admin user ------------------------------------------------------ 61.56s 2025-09-19 17:09:37.520330 | orchestrator | Restart ceph manager 
service ------------------------------------------- 24.35s 2025-09-19 17:09:37.520371 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.26s 2025-09-19 17:09:37.520383 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.16s 2025-09-19 17:09:37.520394 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.06s 2025-09-19 17:09:37.520405 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.02s 2025-09-19 17:09:37.520415 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.93s 2025-09-19 17:09:37.520426 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.92s 2025-09-19 17:09:37.520437 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.90s 2025-09-19 17:09:37.520448 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.89s 2025-09-19 17:09:37.520459 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.20s 2025-09-19 17:09:37.520469 | orchestrator | 2025-09-19 17:09:37.520481 | orchestrator | 2025-09-19 17:09:37 | INFO  | Task a22c2029-82ff-476d-a2b4-64d81caaa354 is in state SUCCESS 2025-09-19 17:09:37.521044 | orchestrator | 2025-09-19 17:09:37.521070 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:09:37.521081 | orchestrator | 2025-09-19 17:09:37.521092 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:09:37.521103 | orchestrator | Friday 19 September 2025 17:07:30 +0000 (0:00:00.233) 0:00:00.233 ****** 2025-09-19 17:09:37.521114 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:09:37.521125 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:09:37.521136 | orchestrator | ok: 
[testbed-node-2]
2025-09-19 17:09:37.521146 | orchestrator |
2025-09-19 17:09:37.521157 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 17:09:37.521168 | orchestrator | Friday 19 September 2025 17:07:30 +0000 (0:00:00.315) 0:00:00.548 ******
2025-09-19 17:09:37.521179 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-09-19 17:09:37.521190 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-09-19 17:09:37.521201 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-09-19 17:09:37.521212 | orchestrator |
2025-09-19 17:09:37.521223 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-09-19 17:09:37.521233 | orchestrator |
2025-09-19 17:09:37.521244 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-19 17:09:37.521266 | orchestrator | Friday 19 September 2025 17:07:31 +0000 (0:00:00.540) 0:00:01.088 ******
2025-09-19 17:09:37.521277 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:09:37.521301 | orchestrator |
2025-09-19 17:09:37.521312 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-09-19 17:09:37.521323 | orchestrator | Friday 19 September 2025 17:07:32 +0000 (0:00:00.543) 0:00:01.631 ******
2025-09-19 17:09:37.521334 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-09-19 17:09:37.521371 | orchestrator |
2025-09-19 17:09:37.521382 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-09-19 17:09:37.521393 | orchestrator | Friday 19 September 2025 17:07:35 +0000 (0:00:03.163) 0:00:04.795 ******
2025-09-19 17:09:37.521404 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-09-19 17:09:37.521415 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-09-19 17:09:37.521426 | orchestrator |
2025-09-19 17:09:37.521437 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-09-19 17:09:37.521447 | orchestrator | Friday 19 September 2025 17:07:42 +0000 (0:00:07.072) 0:00:11.867 ******
2025-09-19 17:09:37.521458 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-09-19 17:09:37.521469 | orchestrator |
2025-09-19 17:09:37.521479 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-09-19 17:09:37.521490 | orchestrator | Friday 19 September 2025 17:07:45 +0000 (0:00:03.448) 0:00:15.316 ******
2025-09-19 17:09:37.521501 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 17:09:37.521512 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-09-19 17:09:37.521523 | orchestrator |
2025-09-19 17:09:37.521533 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-09-19 17:09:37.521544 | orchestrator | Friday 19 September 2025 17:07:49 +0000 (0:00:04.075) 0:00:19.392 ******
2025-09-19 17:09:37.521555 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 17:09:37.521566 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-09-19 17:09:37.521577 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-09-19 17:09:37.521588 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-09-19 17:09:37.521598 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-09-19 17:09:37.521609 | orchestrator |
2025-09-19 17:09:37.521620 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-09-19 17:09:37.521631 | orchestrator | Friday 19 September 2025 17:08:08 +0000 (0:00:19.017) 0:00:38.409 ******
2025-09-19 17:09:37.521642 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-09-19 17:09:37.521653 | orchestrator |
2025-09-19 17:09:37.521663 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-09-19 17:09:37.521674 | orchestrator | Friday 19 September 2025 17:08:13 +0000 (0:00:04.942) 0:00:43.351 ******
2025-09-19 17:09:37.521689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.521716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.521741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.521754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.521768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.521779 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.521798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.521818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.521835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.521846 | orchestrator |
2025-09-19 17:09:37.521858 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-09-19 17:09:37.521869 | orchestrator | Friday 19 September 2025 17:08:15 +0000 (0:00:02.220) 0:00:45.572 ******
2025-09-19 17:09:37.521880 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-09-19 17:09:37.521891 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-09-19 17:09:37.521901 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-09-19 17:09:37.521984 | orchestrator |
2025-09-19 17:09:37.521995 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-09-19 17:09:37.522006 | orchestrator | Friday 19 September 2025 17:08:17 +0000 (0:00:00.113) 0:00:47.084 ******
2025-09-19 17:09:37.522066 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:09:37.522078 | orchestrator |
2025-09-19 17:09:37.522089 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-09-19 17:09:37.522100 | orchestrator | Friday 19 September 2025 17:08:17 +0000 (0:00:00.326) 0:00:47.197 ******
2025-09-19 17:09:37.522111 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:09:37.522122 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:09:37.522133 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:09:37.522144 | orchestrator |
2025-09-19 17:09:37.522155 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-19 17:09:37.522166 | orchestrator | Friday 19 September 2025 17:08:17 +0000 (0:00:00.326) 0:00:47.524 ******
2025-09-19 17:09:37.522177 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:09:37.522188 | orchestrator |
2025-09-19 17:09:37.522199 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-09-19 17:09:37.522210 | orchestrator | Friday 19 September 2025 17:08:18 +0000 (0:00:00.529) 0:00:48.054 ******
2025-09-19 17:09:37.522222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.522251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.522268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.522279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522359 | orchestrator |
2025-09-19 17:09:37.522370 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-09-19 17:09:37.522380 | orchestrator | Friday 19 September 2025 17:08:22 +0000 (0:00:03.861) 0:00:51.917 ******
2025-09-19 17:09:37.522390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.522400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522428 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:09:37.522444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.522454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522485 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:09:37.522502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.522520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522567 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:09:37.522584 | orchestrator |
2025-09-19 17:09:37.522610 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-09-19 17:09:37.522628 | orchestrator | Friday 19 September 2025 17:08:23 +0000 (0:00:01.368) 0:00:53.286 ******
2025-09-19 17:09:37.522656 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.522683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.522713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522742 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:09:37.522752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522762 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:09:37.522782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.522793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.522820 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:09:37.522830 | orchestrator |
2025-09-19 17:09:37.522840 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-09-19 17:09:37.522850 | orchestrator | Friday 19 September 2025 17:08:25 +0000 (0:00:01.369) 0:00:54.656 ******
2025-09-19 17:09:37.522860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.522875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-19 17:09:37.522890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311',
'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 17:09:37.522900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.522941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.522960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.522970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.522986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.522997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.523007 | orchestrator | 2025-09-19 
17:09:37.523016 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-09-19 17:09:37.523026 | orchestrator | Friday 19 September 2025 17:08:29 +0000 (0:00:04.132) 0:00:58.789 ******
2025-09-19 17:09:37.523036 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:09:37.523046 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:09:37.523060 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:09:37.523070 | orchestrator |
2025-09-19 17:09:37.523080 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-09-19 17:09:37.523090 | orchestrator | Friday 19 September 2025 17:08:32 +0000 (0:00:02.954) 0:01:01.743 ******
2025-09-19 17:09:37.523099 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 17:09:37.523109 | orchestrator |
2025-09-19 17:09:37.523119 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-09-19 17:09:37.523128 | orchestrator | Friday 19 September 2025 17:08:33 +0000 (0:00:01.608) 0:01:03.352 ******
2025-09-19 17:09:37.523138 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:09:37.523148 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:09:37.523164 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:09:37.523174 | orchestrator |
2025-09-19 17:09:37.523183 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-09-19 17:09:37.523193 | orchestrator | Friday 19 September 2025 17:08:34 +0000 (0:00:00.493) 0:01:03.845 ******
2025-09-19 17:09:37.523203 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 17:09:37.523214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 17:09:37.523230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 17:09:37.523245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.523256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.523273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': 
{'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.523283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.523294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.523304 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.523314 | orchestrator | 2025-09-19 17:09:37.523324 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-09-19 17:09:37.523339 | orchestrator | Friday 19 September 2025 17:08:45 +0000 (0:00:11.680) 0:01:15.529 ****** 2025-09-19 17:09:37.523353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 17:09:37.523373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:09:37.523383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:09:37.523393 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:09:37.523403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2025-09-19 17:09:37.523414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:09:37.523430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:09:37.523440 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:09:37.523455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-19 17:09:37.523475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-19 17:09:37.523485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:09:37.523495 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:09:37.523505 | orchestrator | 2025-09-19 17:09:37.523515 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-09-19 17:09:37.523525 | orchestrator | Friday 19 September 2025 17:08:47 
+0000 (0:00:01.189) 0:01:16.719 ****** 2025-09-19 17:09:37.523535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 17:09:37.523552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 17:09:37.523582 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-19 17:09:37.523592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.523603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.523613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.523623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:09:37.523639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.523663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:09:37.523681 | orchestrator |
2025-09-19 17:09:37.523697 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-19 17:09:37.523712 | orchestrator | Friday 19 September 2025 17:08:51 +0000 (0:00:04.758) 0:01:21.477 ******
2025-09-19 17:09:37.523728 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:09:37.523745 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:09:37.523763 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:09:37.523779 | orchestrator |
2025-09-19 17:09:37.523796 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-09-19 17:09:37.523808 | orchestrator | Friday 19 September 2025 17:08:52 +0000 (0:00:00.793) 0:01:22.270 ******
2025-09-19 17:09:37.523818 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:09:37.523833 | orchestrator |
2025-09-19 17:09:37.523850 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-09-19 17:09:37.523865 | orchestrator | Friday 19 September 2025 17:08:55 +0000 (0:00:02.840) 0:01:25.111 ******
2025-09-19 17:09:37.523880 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:09:37.523896 | orchestrator |
2025-09-19 17:09:37.523934 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-09-19 17:09:37.523950 | orchestrator | Friday 19 September 2025 17:08:58 +0000 (0:00:03.231) 0:01:28.342 ******
2025-09-19 17:09:37.523967 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:09:37.523984 | orchestrator |
2025-09-19 17:09:37.524000 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-19 17:09:37.524017 | orchestrator | Friday 19 September 2025 17:09:11 +0000 (0:00:12.747) 0:01:41.089 ******
2025-09-19 17:09:37.524027 | orchestrator |
2025-09-19 17:09:37.524037 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-19 17:09:37.524046 | orchestrator | Friday 19 September 2025 17:09:11 +0000 (0:00:00.196) 0:01:41.286 ******
2025-09-19 17:09:37.524056 | orchestrator |
2025-09-19 17:09:37.524065 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-19 17:09:37.524075 | orchestrator | Friday 19 September 2025 17:09:11 +0000 (0:00:00.143) 0:01:41.429 ******
2025-09-19 17:09:37.524084 | orchestrator |
2025-09-19 17:09:37.524094 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-09-19 17:09:37.524103 | orchestrator | Friday 19 September 2025 17:09:11 +0000 (0:00:00.135) 0:01:41.565 ******
2025-09-19 17:09:37.524112 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:09:37.524122 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:09:37.524131 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:09:37.524141 | orchestrator |
2025-09-19 17:09:37.524150 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-09-19 17:09:37.524160 | orchestrator | Friday 19 September 2025 17:09:20 +0000 (0:00:08.313) 0:01:49.881 ******
2025-09-19 17:09:37.524169 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:09:37.524179 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:09:37.524188 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:09:37.524197 | orchestrator |
2025-09-19 17:09:37.524207 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-09-19 17:09:37.524217 | orchestrator | Friday 19 September 2025 17:09:25 +0000 (0:00:05.521) 0:01:55.403 ******
2025-09-19 17:09:37.524235 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:09:37.524244 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:09:37.524254 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:09:37.524263 | orchestrator |
2025-09-19 17:09:37.524273 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 17:09:37.524284 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 17:09:37.524294 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 17:09:37.524304 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 17:09:37.524314 | orchestrator |
2025-09-19 17:09:37.524323 | orchestrator |
2025-09-19 17:09:37.524333 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 17:09:37.524342 | orchestrator | Friday 19 September 2025 17:09:36 +0000 (0:00:10.519) 0:02:05.923 ******
2025-09-19 17:09:37.524352 | orchestrator | ===============================================================================
2025-09-19 17:09:37.524361 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 19.02s
2025-09-19 17:09:37.524378 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.75s
2025-09-19 17:09:37.524388 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.68s
2025-09-19 17:09:37.524398 | orchestrator | barbican : Restart barbican-worker container --------------------------- 10.52s
2025-09-19 17:09:37.524407 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.32s
2025-09-19 17:09:37.524417 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.07s
2025-09-19 17:09:37.524426 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.52s
2025-09-19 17:09:37.524436 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.94s
2025-09-19 17:09:37.524446 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.76s
2025-09-19 17:09:37.524455 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.13s
2025-09-19 17:09:37.524465 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.08s
2025-09-19 17:09:37.524474 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.86s
2025-09-19 17:09:37.524490 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.45s
2025-09-19 17:09:37.524500 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 3.23s
2025-09-19 17:09:37.524509 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.16s
2025-09-19 17:09:37.524519 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.96s
2025-09-19 17:09:37.524528 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.84s
2025-09-19 17:09:37.524538 | orchestrator |
barbican : Ensuring config directories exist ---------------------------- 2.22s 2025-09-19 17:09:37.524548 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.61s 2025-09-19 17:09:37.524557 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.51s 2025-09-19 17:09:37.524567 | orchestrator | 2025-09-19 17:09:37 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:37.524576 | orchestrator | 2025-09-19 17:09:37 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:37.524586 | orchestrator | 2025-09-19 17:09:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:40.539183 | orchestrator | 2025-09-19 17:09:40 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:40.539828 | orchestrator | 2025-09-19 17:09:40 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:40.539896 | orchestrator | 2025-09-19 17:09:40 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:40.540617 | orchestrator | 2025-09-19 17:09:40 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:09:40.540641 | orchestrator | 2025-09-19 17:09:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:43.570584 | orchestrator | 2025-09-19 17:09:43 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:43.571329 | orchestrator | 2025-09-19 17:09:43 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:43.571364 | orchestrator | 2025-09-19 17:09:43 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:43.571785 | orchestrator | 2025-09-19 17:09:43 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:09:43.571806 | orchestrator | 2025-09-19 17:09:43 | INFO  | Wait 1 
second(s) until the next check 2025-09-19 17:09:46.593600 | orchestrator | 2025-09-19 17:09:46 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:46.593732 | orchestrator | 2025-09-19 17:09:46 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:46.594543 | orchestrator | 2025-09-19 17:09:46 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:46.595341 | orchestrator | 2025-09-19 17:09:46 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:09:46.595368 | orchestrator | 2025-09-19 17:09:46 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:49.618503 | orchestrator | 2025-09-19 17:09:49 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:49.620389 | orchestrator | 2025-09-19 17:09:49 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:49.620423 | orchestrator | 2025-09-19 17:09:49 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:49.620435 | orchestrator | 2025-09-19 17:09:49 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:09:49.621130 | orchestrator | 2025-09-19 17:09:49 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:52.642437 | orchestrator | 2025-09-19 17:09:52 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:52.642559 | orchestrator | 2025-09-19 17:09:52 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:52.643977 | orchestrator | 2025-09-19 17:09:52 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:52.644512 | orchestrator | 2025-09-19 17:09:52 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:09:52.644537 | orchestrator | 2025-09-19 17:09:52 | INFO  | Wait 1 second(s) until the next check 
2025-09-19 17:09:55.693247 | orchestrator | 2025-09-19 17:09:55 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:55.695363 | orchestrator | 2025-09-19 17:09:55 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:55.697151 | orchestrator | 2025-09-19 17:09:55 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:55.699227 | orchestrator | 2025-09-19 17:09:55 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:09:55.699302 | orchestrator | 2025-09-19 17:09:55 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:09:58.747119 | orchestrator | 2025-09-19 17:09:58 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:09:58.749548 | orchestrator | 2025-09-19 17:09:58 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:09:58.751123 | orchestrator | 2025-09-19 17:09:58 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:09:58.752430 | orchestrator | 2025-09-19 17:09:58 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:09:58.752706 | orchestrator | 2025-09-19 17:09:58 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:10:01.785420 | orchestrator | 2025-09-19 17:10:01 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:10:01.785671 | orchestrator | 2025-09-19 17:10:01 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:10:01.787272 | orchestrator | 2025-09-19 17:10:01 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:10:01.787304 | orchestrator | 2025-09-19 17:10:01 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:10:01.787316 | orchestrator | 2025-09-19 17:10:01 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:10:04.833087 | 
orchestrator | 2025-09-19 17:10:04 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:10:04.835386 | orchestrator | 2025-09-19 17:10:04 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:10:04.837425 | orchestrator | 2025-09-19 17:10:04 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:10:04.839447 | orchestrator | 2025-09-19 17:10:04 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:10:04.839569 | orchestrator | 2025-09-19 17:10:04 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:10:07.878088 | orchestrator | 2025-09-19 17:10:07 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:10:07.880389 | orchestrator | 2025-09-19 17:10:07 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:10:07.882530 | orchestrator | 2025-09-19 17:10:07 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:10:07.884111 | orchestrator | 2025-09-19 17:10:07 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:10:07.884231 | orchestrator | 2025-09-19 17:10:07 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:10:10.925232 | orchestrator | 2025-09-19 17:10:10 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:10:10.925321 | orchestrator | 2025-09-19 17:10:10 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:10:10.926307 | orchestrator | 2025-09-19 17:10:10 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:10:10.928023 | orchestrator | 2025-09-19 17:10:10 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:10:10.928057 | orchestrator | 2025-09-19 17:10:10 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:10:13.949853 | orchestrator | 2025-09-19 
17:10:13 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:10:13.950177 | orchestrator | 2025-09-19 17:10:13 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:10:13.950921 | orchestrator | 2025-09-19 17:10:13 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:10:13.951463 | orchestrator | 2025-09-19 17:10:13 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:10:13.951472 | orchestrator | 2025-09-19 17:10:13 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:10:16.998793 | orchestrator | 2025-09-19 17:10:16 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state STARTED 2025-09-19 17:10:17.001152 | orchestrator | 2025-09-19 17:10:16 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:10:17.003495 | orchestrator | 2025-09-19 17:10:17 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:10:17.005417 | orchestrator | 2025-09-19 17:10:17 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:10:17.006007 | orchestrator | 2025-09-19 17:10:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:10:20.052578 | orchestrator | 2025-09-19 17:10:20.052680 | orchestrator | 2025-09-19 17:10:20.052703 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:10:20.052720 | orchestrator | 2025-09-19 17:10:20.052736 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:10:20.052752 | orchestrator | Friday 19 September 2025 17:09:02 +0000 (0:00:00.396) 0:00:00.396 ****** 2025-09-19 17:10:20.052768 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:10:20.052778 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:10:20.052787 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:10:20.052796 | orchestrator | 
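The interleaved `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines above come from a wrapper that polls a set of background task IDs until they leave the STARTED state. A minimal sketch of that polling pattern, assuming a hypothetical `get_state()` callable (not the actual OSISM tooling):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=3.0, log=print):
    """Poll every `interval` seconds until no task is still running."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state not in ("STARTED", "PENDING"):
                pending.discard(task_id)  # finished: SUCCESS, FAILURE, ...
        if pending:
            log(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

Once every task reports a terminal state, the loop ends and the buffered Ansible output (as in the placement play below) is printed.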
2025-09-19 17:10:20.052805 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 17:10:20.052814 | orchestrator | Friday 19 September 2025 17:09:03 +0000 (0:00:00.692) 0:00:01.088 ******
2025-09-19 17:10:20.052823 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-09-19 17:10:20.052832 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-09-19 17:10:20.052841 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-09-19 17:10:20.052849 | orchestrator |
2025-09-19 17:10:20.052858 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-09-19 17:10:20.052866 | orchestrator |
2025-09-19 17:10:20.052875 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-19 17:10:20.052884 | orchestrator | Friday 19 September 2025 17:09:04 +0000 (0:00:00.690) 0:00:01.779 ******
2025-09-19 17:10:20.052892 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:10:20.052902 | orchestrator |
2025-09-19 17:10:20.052961 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-09-19 17:10:20.052973 | orchestrator | Friday 19 September 2025 17:09:04 +0000 (0:00:00.615) 0:00:02.394 ******
2025-09-19 17:10:20.052984 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-09-19 17:10:20.052995 | orchestrator |
2025-09-19 17:10:20.053006 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-09-19 17:10:20.053017 | orchestrator | Friday 19 September 2025 17:09:08 +0000 (0:00:03.932) 0:00:06.327 ******
2025-09-19 17:10:20.053027 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-09-19 17:10:20.053039 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-09-19 17:10:20.053049 | orchestrator |
2025-09-19 17:10:20.053060 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-09-19 17:10:20.053071 | orchestrator | Friday 19 September 2025 17:09:16 +0000 (0:00:07.383) 0:00:13.711 ******
2025-09-19 17:10:20.053082 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 17:10:20.053093 | orchestrator |
2025-09-19 17:10:20.053104 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-09-19 17:10:20.053115 | orchestrator | Friday 19 September 2025 17:09:20 +0000 (0:00:03.956) 0:00:17.672 ******
2025-09-19 17:10:20.053154 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 17:10:20.053166 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-09-19 17:10:20.053177 | orchestrator |
2025-09-19 17:10:20.053188 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-09-19 17:10:20.053199 | orchestrator | Friday 19 September 2025 17:09:24 +0000 (0:00:04.281) 0:00:21.953 ******
2025-09-19 17:10:20.053210 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 17:10:20.053221 | orchestrator |
2025-09-19 17:10:20.053231 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-09-19 17:10:20.053242 | orchestrator | Friday 19 September 2025 17:09:28 +0000 (0:00:03.851) 0:00:25.805 ******
2025-09-19 17:10:20.053253 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-09-19 17:10:20.053264 | orchestrator |
2025-09-19 17:10:20.053275 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-19 17:10:20.053285 | orchestrator | Friday 19 September 2025 17:09:33 +0000 (0:00:05.160) 0:00:30.965 ******
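The `service-ks-register` tasks above are idempotent: a Keystone object that is created reports `changed`, while one that already exists (like the `service` project, created earlier by the barbican play) reports `ok`. A rough illustration of that ensure-style pattern, using a plain dict in place of a real Keystone client (names are illustrative, not the role's actual code):

```python
def ensure(registry, kind, name, **attrs):
    """Create `name` of `kind` if absent; report Ansible-style changed/ok."""
    bucket = registry.setdefault(kind, {})
    if name in bucket:
        return "ok"        # already present, nothing to do
    bucket[name] = attrs   # e.g. endpoint URL, role name, project
    return "changed"
```

Re-running the same play against the same registry then yields `ok` for every item, which is what makes repeated deployments safe.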
2025-09-19 17:10:20.053296 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:10:20.053307 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:10:20.053318 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:10:20.053328 | orchestrator |
2025-09-19 17:10:20.053339 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-09-19 17:10:20.053350 | orchestrator | Friday 19 September 2025 17:09:34 +0000 (0:00:00.569) 0:00:31.535 ******
2025-09-19 17:10:20.053379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-09-19 17:10:20.053416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', ... same item as above, healthcheck_curl http://192.168.16.10:8780 ...})
2025-09-19 17:10:20.053429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', ... same item as above, healthcheck_curl http://192.168.16.11:8780 ...})
2025-09-19 17:10:20.053449 | orchestrator |
2025-09-19 17:10:20.053460 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-09-19 17:10:20.053471 | orchestrator | Friday 19 September 2025 17:09:35 +0000 (0:00:01.307) 0:00:32.842 ******
2025-09-19 17:10:20.053482 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:10:20.053493 | orchestrator |
2025-09-19 17:10:20.053504 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-09-19 17:10:20.053515 | orchestrator | Friday 19 September 2025 17:09:35 +0000 (0:00:00.108) 0:00:32.951 ******
2025-09-19 17:10:20.053526 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:10:20.053537 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:10:20.053548 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:10:20.053558 | orchestrator |
2025-09-19 17:10:20.053570 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-09-19 17:10:20.053581 | orchestrator | Friday 19 September 2025 17:09:35 +0000 (0:00:00.335) 0:00:33.286 ******
2025-09-19 17:10:20.053592 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:10:20.053603 | orchestrator |
2025-09-19 17:10:20.053614 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-09-19 17:10:20.053625 | orchestrator | Friday 19 September 2025 17:09:36 +0000 (0:00:00.441) 0:00:33.727 ******
2025-09-19 17:10:20.053636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', ... same item as above, healthcheck_curl http://192.168.16.10:8780 ...})
2025-09-19 17:10:20.053662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', ... same item as above, healthcheck_curl http://192.168.16.12:8780 ...})
2025-09-19 17:10:20.053675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', ... same item as above, healthcheck_curl http://192.168.16.11:8780 ...})
2025-09-19 17:10:20.053694 | orchestrator |
2025-09-19 17:10:20.053705 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2025-09-19 17:10:20.053716 | orchestrator | Friday 19 September 2025 17:09:37 +0000 (0:00:01.605) 0:00:35.333 ******
2025-09-19 17:10:20.053728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', ... same item as above ...})
2025-09-19 17:10:20.053739 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:10:20.053751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', ... same item as above ...})
2025-09-19 17:10:20.053762 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:10:20.053785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', ... same item as above ...})
2025-09-19 17:10:20.053797 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:10:20.053808 | orchestrator |
2025-09-19 17:10:20.053819 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2025-09-19 17:10:20.053830 | orchestrator | Friday 19 September 2025 17:09:38 +0000 (0:00:00.888) 0:00:36.222 ******
2025-09-19 17:10:20.053841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', ... same item as above ...})
2025-09-19 17:10:20.053859 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:10:20.053871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', ... same item as above ...})
2025-09-19 17:10:20.053882 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:10:20.053894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', ... same item as above ...})
2025-09-19 17:10:20.053905 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:10:20.053939 | orchestrator |
2025-09-19 17:10:20.053951 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2025-09-19 17:10:20.053962 | orchestrator | Friday 19 September 2025 17:09:39 +0000 (0:00:01.059) 0:00:37.282 ******
2025-09-19 17:10:20.053984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', ... same item as above ...})
2025-09-19 17:10:20.053997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', ... same item as above ...})
2025-09-19 17:10:20.054015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', ... same item as above ...})
2025-09-19 17:10:20.054094 | orchestrator |
2025-09-19 17:10:20.054105 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2025-09-19 17:10:20.054116 | orchestrator | Friday 19 September 2025 17:09:41 +0000 (0:00:01.947) 0:00:39.229 ******
2025-09-19 17:10:20.054127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', ... same item as above ...})
2025-09-19 17:10:20.054139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', ... same item as above ...})
2025-09-19 17:10:20.054166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', ... same item as above ...})
2025-09-19 17:10:20.054186 | orchestrator |
2025-09-19 17:10:20.054198 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-09-19 17:10:20.054209 | orchestrator | Friday 19 September 2025 17:09:45 +0000 (0:00:03.992) 0:00:43.222 ******
2025-09-19 17:10:20.054220 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-19 17:10:20.054230 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-19 17:10:20.054241 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-09-19 17:10:20.054252 | orchestrator |
2025-09-19 17:10:20.054263 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] *****************
2025-09-19 17:10:20.054274 | orchestrator | Friday 19 September 2025 17:09:47 +0000 (0:00:02.001) 0:00:45.224 ******
2025-09-19 17:10:20.054285 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:10:20.054296 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:10:20.054306 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:10:20.054317 | orchestrator |
2025-09-19 17:10:20.054328 | orchestrator | TASK [placement : Copying over existing policy file] ***************************
2025-09-19 17:10:20.054339 | orchestrator | Friday 19 September 2025 17:09:49 +0000
(0:00:01.791) 0:00:47.015 ****** 2025-09-19 17:10:20.054350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 17:10:20.054362 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:10:20.054373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 17:10:20.054385 
| orchestrator | skipping: [testbed-node-1] 2025-09-19 17:10:20.054407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-19 17:10:20.054426 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:10:20.054436 | orchestrator | 2025-09-19 17:10:20.054447 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-19 17:10:20.054458 | orchestrator | Friday 19 September 2025 17:09:50 +0000 (0:00:01.053) 0:00:48.069 ****** 2025-09-19 17:10:20.054470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 17:10:20.054482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 17:10:20.054493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-19 17:10:20.054504 | orchestrator | 2025-09-19 17:10:20.054515 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-19 17:10:20.054538 | orchestrator | Friday 19 September 2025 17:09:52 +0000 (0:00:01.406) 0:00:49.475 ****** 2025-09-19 17:10:20.054549 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:10:20.054560 | orchestrator | 2025-09-19 17:10:20.054571 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-19 17:10:20.054581 | orchestrator | Friday 19 September 2025 17:09:54 +0000 (0:00:02.493) 0:00:51.969 ****** 2025-09-19 17:10:20.054592 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:10:20.054603 | orchestrator | 2025-09-19 17:10:20.054614 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-19 17:10:20.054625 | orchestrator | Friday 19 September 2025 17:09:56 +0000 (0:00:02.344) 0:00:54.313 ****** 2025-09-19 17:10:20.054635 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:10:20.054646 | orchestrator | 2025-09-19 17:10:20.054657 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-19 17:10:20.054811 | orchestrator | Friday 19 September 2025 17:10:11 +0000 (0:00:14.374) 0:01:08.688 ****** 2025-09-19 17:10:20.054822 | orchestrator | 2025-09-19 17:10:20.054833 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-19 17:10:20.054844 | orchestrator | Friday 19 September 2025 17:10:11 +0000 (0:00:00.058) 0:01:08.746 ****** 2025-09-19 17:10:20.054855 | orchestrator | 2025-09-19 17:10:20.054874 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-19 17:10:20.054885 | orchestrator | Friday 
19 September 2025 17:10:11 +0000 (0:00:00.075) 0:01:08.821 ****** 2025-09-19 17:10:20.054896 | orchestrator | 2025-09-19 17:10:20.054907 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-19 17:10:20.054974 | orchestrator | Friday 19 September 2025 17:10:11 +0000 (0:00:00.062) 0:01:08.883 ****** 2025-09-19 17:10:20.054992 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:10:20.055010 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:10:20.055021 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:10:20.055032 | orchestrator | 2025-09-19 17:10:20.055043 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:10:20.055055 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-19 17:10:20.055067 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 17:10:20.055078 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-19 17:10:20.055088 | orchestrator | 2025-09-19 17:10:20.055099 | orchestrator | 2025-09-19 17:10:20.055110 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:10:20.055121 | orchestrator | Friday 19 September 2025 17:10:19 +0000 (0:00:07.832) 0:01:16.716 ****** 2025-09-19 17:10:20.055131 | orchestrator | =============================================================================== 2025-09-19 17:10:20.055142 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.37s 2025-09-19 17:10:20.055153 | orchestrator | placement : Restart placement-api container ----------------------------- 7.83s 2025-09-19 17:10:20.055163 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.38s 2025-09-19 17:10:20.055174 | orchestrator 
| service-ks-register : placement | Granting user roles ------------------- 5.16s 2025-09-19 17:10:20.055184 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.28s 2025-09-19 17:10:20.055195 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.99s 2025-09-19 17:10:20.055206 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.96s 2025-09-19 17:10:20.055216 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.93s 2025-09-19 17:10:20.055227 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.85s 2025-09-19 17:10:20.055247 | orchestrator | placement : Creating placement databases -------------------------------- 2.49s 2025-09-19 17:10:20.055258 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.34s 2025-09-19 17:10:20.055268 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.00s 2025-09-19 17:10:20.055279 | orchestrator | placement : Copying over config.json files for services ----------------- 1.95s 2025-09-19 17:10:20.055289 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.79s 2025-09-19 17:10:20.055300 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.61s 2025-09-19 17:10:20.055310 | orchestrator | placement : Check placement containers ---------------------------------- 1.41s 2025-09-19 17:10:20.055321 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.31s 2025-09-19 17:10:20.055332 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 1.06s 2025-09-19 17:10:20.055343 | orchestrator | placement : Copying over existing policy file --------------------------- 1.05s 2025-09-19 17:10:20.055353 | orchestrator | 
service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.89s 2025-09-19 17:10:20.055364 | orchestrator | 2025-09-19 17:10:20 | INFO  | Task bbdb67c8-55e0-4bdb-9cc3-82fa4c70df80 is in state SUCCESS 2025-09-19 17:10:20.055375 | orchestrator | 2025-09-19 17:10:20 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:10:20.055386 | orchestrator | 2025-09-19 17:10:20 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:10:20.056239 | orchestrator | 2025-09-19 17:10:20 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:10:20.056534 | orchestrator | 2025-09-19 17:10:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:10:23.104397 | orchestrator | 2025-09-19 17:10:23 | INFO  | Task a28be144-b869-49d4-8a7f-9b49b004d30e is in state STARTED 2025-09-19 17:10:23.107692 | orchestrator | 2025-09-19 17:10:23 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:10:23.110674 | orchestrator | 2025-09-19 17:10:23 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:10:23.114064 | orchestrator | 2025-09-19 17:10:23 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:10:23.114105 | orchestrator | 2025-09-19 17:10:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:10:26.149359 | orchestrator | 2025-09-19 17:10:26 | INFO  | Task a28be144-b869-49d4-8a7f-9b49b004d30e is in state SUCCESS 2025-09-19 17:10:26.150739 | orchestrator | 2025-09-19 17:10:26 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED 2025-09-19 17:10:26.152552 | orchestrator | 2025-09-19 17:10:26 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:10:26.154285 | orchestrator | 2025-09-19 17:10:26 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:10:26.156113 | orchestrator | 2025-09-19 
17:10:26 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:10:26.156148 | orchestrator | 2025-09-19 17:10:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:10:29.201488 | orchestrator | 2025-09-19 17:10:29 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED 2025-09-19 17:10:29.202951 | orchestrator | 2025-09-19 17:10:29 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state STARTED 2025-09-19 17:10:29.204444 | orchestrator | 2025-09-19 17:10:29 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:10:29.206154 | orchestrator | 2025-09-19 17:10:29 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:10:29.206246 | orchestrator | 2025-09-19 17:10:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:10:32.237803 | orchestrator | 2025-09-19 17:10:32 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED 2025-09-19 17:10:32.237893 | orchestrator | 2025-09-19 17:10:32 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED 2025-09-19 17:10:32.238316 | orchestrator | 2025-09-19 17:10:32 | INFO  | Task 3fbfefbe-e6c6-4843-b773-2833876a8e5f is in state SUCCESS 2025-09-19 17:10:32.241000 | orchestrator | 2025-09-19 17:10:32.241040 | orchestrator | 2025-09-19 17:10:32.241050 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:10:32.241060 | orchestrator | 2025-09-19 17:10:32.241069 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:10:32.241078 | orchestrator | Friday 19 September 2025 17:10:23 +0000 (0:00:00.168) 0:00:00.168 ****** 2025-09-19 17:10:32.241087 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:10:32.241097 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:10:32.241165 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:10:32.241176 | orchestrator | 
2025-09-19 17:10:32.241186 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:10:32.241195 | orchestrator | Friday 19 September 2025 17:10:23 +0000 (0:00:00.260) 0:00:00.429 ****** 2025-09-19 17:10:32.241204 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-19 17:10:32.241213 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-19 17:10:32.241276 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-19 17:10:32.241287 | orchestrator | 2025-09-19 17:10:32.241296 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-09-19 17:10:32.241305 | orchestrator | 2025-09-19 17:10:32.241314 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-09-19 17:10:32.241323 | orchestrator | Friday 19 September 2025 17:10:23 +0000 (0:00:00.519) 0:00:00.949 ****** 2025-09-19 17:10:32.241332 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:10:32.241342 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:10:32.241351 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:10:32.241360 | orchestrator | 2025-09-19 17:10:32.241368 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:10:32.241378 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:10:32.241389 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:10:32.241398 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:10:32.241406 | orchestrator | 2025-09-19 17:10:32.241415 | orchestrator | 2025-09-19 17:10:32.241424 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:10:32.241433 | orchestrator | 
Friday 19 September 2025 17:10:24 +0000 (0:00:00.740) 0:00:01.689 ****** 2025-09-19 17:10:32.241441 | orchestrator | =============================================================================== 2025-09-19 17:10:32.241450 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.74s 2025-09-19 17:10:32.241459 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.52s 2025-09-19 17:10:32.241468 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.26s 2025-09-19 17:10:32.241476 | orchestrator | 2025-09-19 17:10:32.241485 | orchestrator | 2025-09-19 17:10:32.241493 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:10:32.241502 | orchestrator | 2025-09-19 17:10:32.241511 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:10:32.241520 | orchestrator | Friday 19 September 2025 17:07:30 +0000 (0:00:00.267) 0:00:00.267 ****** 2025-09-19 17:10:32.241528 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:10:32.242337 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:10:32.242419 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:10:32.242443 | orchestrator | 2025-09-19 17:10:32.242493 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:10:32.242507 | orchestrator | Friday 19 September 2025 17:07:31 +0000 (0:00:00.385) 0:00:00.653 ****** 2025-09-19 17:10:32.242519 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-19 17:10:32.242531 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-19 17:10:32.242541 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-19 17:10:32.242552 | orchestrator | 2025-09-19 17:10:32.242563 | orchestrator | PLAY [Apply role designate] 
**************************************************** 2025-09-19 17:10:32.242574 | orchestrator | 2025-09-19 17:10:32.242585 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 17:10:32.242596 | orchestrator | Friday 19 September 2025 17:07:31 +0000 (0:00:00.609) 0:00:01.262 ****** 2025-09-19 17:10:32.242607 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:10:32.242618 | orchestrator | 2025-09-19 17:10:32.242629 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-19 17:10:32.242648 | orchestrator | Friday 19 September 2025 17:07:32 +0000 (0:00:00.645) 0:00:01.908 ****** 2025-09-19 17:10:32.242668 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-19 17:10:32.242688 | orchestrator | 2025-09-19 17:10:32.242700 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-19 17:10:32.242711 | orchestrator | Friday 19 September 2025 17:07:35 +0000 (0:00:03.267) 0:00:05.175 ****** 2025-09-19 17:10:32.242722 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-19 17:10:32.242733 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-19 17:10:32.242743 | orchestrator | 2025-09-19 17:10:32.242754 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-19 17:10:32.242765 | orchestrator | Friday 19 September 2025 17:07:42 +0000 (0:00:06.880) 0:00:12.056 ****** 2025-09-19 17:10:32.242776 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 17:10:32.242786 | orchestrator | 2025-09-19 17:10:32.242797 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-19 
17:10:32.242808 | orchestrator | Friday 19 September 2025 17:07:46 +0000 (0:00:03.621) 0:00:15.677 ****** 2025-09-19 17:10:32.242867 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 17:10:32.242882 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-19 17:10:32.242893 | orchestrator | 2025-09-19 17:10:32.242904 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-19 17:10:32.242963 | orchestrator | Friday 19 September 2025 17:07:50 +0000 (0:00:04.144) 0:00:19.822 ****** 2025-09-19 17:10:32.242978 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 17:10:32.242989 | orchestrator | 2025-09-19 17:10:32.243000 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-19 17:10:32.243011 | orchestrator | Friday 19 September 2025 17:07:53 +0000 (0:00:03.404) 0:00:23.226 ****** 2025-09-19 17:10:32.243022 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-19 17:10:32.243032 | orchestrator | 2025-09-19 17:10:32.243043 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-19 17:10:32.243054 | orchestrator | Friday 19 September 2025 17:07:58 +0000 (0:00:04.643) 0:00:27.869 ****** 2025-09-19 17:10:32.243069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 17:10:32.243118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 17:10:32.243131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 17:10:32.243144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}}) 2025-09-19 17:10:32.243242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243370 | orchestrator | 2025-09-19 17:10:32.243382 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-19 17:10:32.243393 | orchestrator | Friday 19 September 2025 17:08:01 +0000 (0:00:03.045) 0:00:30.915 ****** 2025-09-19 17:10:32.243404 | orchestrator | skipping: 
[testbed-node-0] 2025-09-19 17:10:32.243415 | orchestrator | 2025-09-19 17:10:32.243426 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-19 17:10:32.243444 | orchestrator | Friday 19 September 2025 17:08:01 +0000 (0:00:00.127) 0:00:31.043 ****** 2025-09-19 17:10:32.243455 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:10:32.243466 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:10:32.243476 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:10:32.243487 | orchestrator | 2025-09-19 17:10:32.243498 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-19 17:10:32.243509 | orchestrator | Friday 19 September 2025 17:08:01 +0000 (0:00:00.264) 0:00:31.307 ****** 2025-09-19 17:10:32.243520 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:10:32.243532 | orchestrator | 2025-09-19 17:10:32.243542 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-19 17:10:32.243553 | orchestrator | Friday 19 September 2025 17:08:02 +0000 (0:00:00.724) 0:00:32.031 ****** 2025-09-19 17:10:32.243564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 17:10:32.243581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 17:10:32.243593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 17:10:32.243618 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 
17:10:32.243660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243817 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.243866 | orchestrator | 2025-09-19 17:10:32.243877 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-19 17:10:32.243888 | orchestrator | Friday 19 September 2025 17:08:09 +0000 (0:00:06.493) 0:00:38.525 ****** 2025-09-19 17:10:32.243900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 17:10:32.243929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 17:10:32.243952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-09-19 17:10:32.243964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 17:10:32.243975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 17:10:32.244002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:10:32.244014 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:10:32.244026 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 17:10:32.244037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 17:10:32.244054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 17:10:32.244066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 17:10:32.244077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 17:10:32.244102 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:10:32.244114 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:10:32.244125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 17:10:32.244137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 17:10:32.244148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244206 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:10:32.244217 | orchestrator |
2025-09-19 17:10:32.244234 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-09-19 17:10:32.244245 | orchestrator | Friday 19 September 2025 17:08:10 +0000 (0:00:01.121) 0:00:39.647 ******
2025-09-19 17:10:32.244257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.244269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.244280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244337 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:10:32.244355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.244367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.244378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244438 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:10:32.244456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.244468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.244479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244536 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:10:32.244547 | orchestrator |
2025-09-19 17:10:32.244558 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-09-19 17:10:32.244569 | orchestrator | Friday 19 September 2025 17:08:11 +0000 (0:00:01.740) 0:00:41.387 ******
2025-09-19 17:10:32.244588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.244600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.244616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.244635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.244646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.244665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.244677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244777 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244834 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.244845 | orchestrator |
2025-09-19 17:10:32.244856 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-09-19 17:10:32.244867 | orchestrator | Friday 19 September 2025 17:08:19 +0000 (0:00:07.754) 0:00:49.142 ******
2025-09-19 17:10:32.244885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.244897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.244924 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.244948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.244960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.244972 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.244990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245090 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled':
True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.245158 | orchestrator | 2025-09-19 17:10:32.245169 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-19 17:10:32.245180 | orchestrator | Friday 19 September 2025 17:08:42 +0000 (0:00:23.138) 0:01:12.281 ****** 2025-09-19 17:10:32.245191 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 17:10:32.245202 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 17:10:32.245213 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-19 17:10:32.245224 | orchestrator | 2025-09-19 17:10:32.245235 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-19 17:10:32.245246 | orchestrator | Friday 19 September 2025 17:08:50 +0000 (0:00:08.169) 0:01:20.450 ****** 2025-09-19 17:10:32.245257 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 17:10:32.245267 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 17:10:32.245278 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-19 17:10:32.245289 | orchestrator | 2025-09-19 17:10:32.245300 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-19 17:10:32.245311 | orchestrator | Friday 19 
September 2025 17:08:54 +0000 (0:00:03.643) 0:01:24.094 ****** 2025-09-19 17:10:32.245328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 17:10:32.245346 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 17:10:32.245358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 
'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.245375 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.245387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.245445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.245608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes':
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245694 | orchestrator |
2025-09-19 17:10:32.245705 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-09-19 17:10:32.245717 | orchestrator | Friday 19 September 2025 17:08:58 +0000 (0:00:04.207) 0:01:28.301 ******
2025-09-19 17:10:32.245734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.245756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.245768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.245785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.245796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.245856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.245906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.245998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes':
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.246066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.246082 | orchestrator |
2025-09-19 17:10:32.246094 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-19 17:10:32.246105 | orchestrator | Friday 19 September 2025 17:09:02 +0000 (0:00:00.467) 0:01:32.005 ******
2025-09-19 17:10:32.246116 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:10:32.246127 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:10:32.246138 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:10:32.246149 | orchestrator |
2025-09-19 17:10:32.246160 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-09-19 17:10:32.246170 | orchestrator | Friday 19 September 2025 17:09:02 +0000 (0:00:00.467) 0:01:32.473 ******
2025-09-19 17:10:32.246182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.246208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.246221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.246232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.246244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.246260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.246272 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:10:32.246284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-19 17:10:32.246311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-19 17:10:32.246323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.246334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.246345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.246362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-19 17:10:32.246380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:10:32.246399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-09-19 17:10:32.246411 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:10:32.246422 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-19 17:10:32.246434 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-19 17:10:32.246445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-19 17:10:32.246462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-09-19 17:10:32.246473 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:10:32.246491 | orchestrator | 2025-09-19 17:10:32.246502 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-09-19 17:10:32.246513 | orchestrator | Friday 19 September 2025 17:09:04 +0000 (0:00:01.517) 0:01:33.991 ****** 2025-09-19 17:10:32.246525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 17:10:32.246543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 17:10:32.246555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-19 17:10:32.246566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 17:10:32.246582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 17:10:32.246600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-19 17:10:32.246611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.246628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.246640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.246652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.246663 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.246683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.246701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.246713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 
'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.246730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.246741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-19 17:10:32.246753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.246764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-19 17:10:32.246782 | orchestrator |
2025-09-19 17:10:32.246793 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-19 17:10:32.246804 | orchestrator | Friday 19 September 2025 17:09:09 +0000 (0:00:05.085) 0:01:39.077 ******
2025-09-19 17:10:32.246815 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:10:32.246831 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:10:32.246842 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:10:32.246853 | orchestrator |
2025-09-19 17:10:32.246864 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-09-19 17:10:32.246875 | orchestrator | Friday 19 September 2025 17:09:10 +0000 (0:00:00.489) 0:01:39.566 ******
2025-09-19 17:10:32.246886 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-09-19 17:10:32.246897 | orchestrator |
2025-09-19 17:10:32.246908 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-09-19 17:10:32.246936 | orchestrator | Friday 19 September 2025 17:09:12 +0000 (0:00:02.338) 0:01:41.904 ******
2025-09-19 17:10:32.246947 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-19 17:10:32.246957 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-09-19 17:10:32.246968 | orchestrator |
2025-09-19 17:10:32.246979 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-09-19 17:10:32.246990 | orchestrator | Friday 19 September 2025 17:09:14 +0000 (0:00:02.443) 0:01:44.348 ******
2025-09-19 17:10:32.247001 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:10:32.247011 | orchestrator |
2025-09-19 17:10:32.247023 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-19 17:10:32.247034 | orchestrator | Friday 19 September 2025 17:09:30 +0000 (0:00:15.862) 0:02:00.211 ******
2025-09-19 17:10:32.247044 | orchestrator |
2025-09-19 17:10:32.247055 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-19 17:10:32.247066 | orchestrator | Friday 19 September 2025 17:09:30 +0000 (0:00:00.252) 0:02:00.463 ******
2025-09-19 17:10:32.247077 | orchestrator |
2025-09-19 17:10:32.247089 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-19 17:10:32.247100 | orchestrator | Friday 19 September 2025 17:09:31 +0000 (0:00:00.064) 0:02:00.528 ******
2025-09-19 17:10:32.247110 | orchestrator |
2025-09-19 17:10:32.247121 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-09-19 17:10:32.247132 | orchestrator | Friday 19 September 2025 17:09:31 +0000 (0:00:00.064) 0:02:00.593 ******
2025-09-19 17:10:32.247143 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:10:32.247154 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:10:32.247165 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:10:32.247176 | orchestrator |
2025-09-19 17:10:32.247187 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-09-19 17:10:32.247198 | orchestrator | Friday 19 September 2025 17:09:40 +0000 (0:00:09.208) 0:02:09.802 ******
2025-09-19 17:10:32.247215 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:10:32.247226 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:10:32.247237 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:10:32.247248 | orchestrator |
2025-09-19 17:10:32.247260 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-09-19 17:10:32.247271 | orchestrator | Friday 19 September 2025 17:09:51 +0000 (0:00:10.975) 0:02:20.777 ******
2025-09-19 17:10:32.247282 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:10:32.247293 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:10:32.247304 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:10:32.247314 | orchestrator |
2025-09-19 17:10:32.247325 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-09-19 17:10:32.247336 | orchestrator | Friday 19 September 2025 17:10:01 +0000 (0:00:10.151) 0:02:30.929 ******
2025-09-19 17:10:32.247347 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:10:32.247358 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:10:32.247377 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:10:32.247387 | orchestrator |
2025-09-19 17:10:32.247399 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-09-19 17:10:32.247410 | orchestrator | Friday 19 September 2025 17:10:11 +0000 (0:00:10.031) 0:02:40.960 ******
2025-09-19 17:10:32.247421 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:10:32.247432 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:10:32.247442 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:10:32.247453 | orchestrator |
2025-09-19 17:10:32.247464 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-09-19 17:10:32.247475 | orchestrator | Friday 19 September 2025 17:10:17 +0000 (0:00:05.690) 0:02:46.650 ******
2025-09-19 17:10:32.247485 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:10:32.247496 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:10:32.247507 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:10:32.247518 | orchestrator |
2025-09-19 17:10:32.247529 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-09-19 17:10:32.247539 | orchestrator | Friday 19 September 2025 17:10:22 +0000 (0:00:05.508) 0:02:52.159 ******
2025-09-19 17:10:32.247550 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:10:32.247561 | orchestrator |
2025-09-19 17:10:32.247572 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 17:10:32.247583 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 17:10:32.247594 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 17:10:32.247605 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 17:10:32.247616 | orchestrator |
2025-09-19 17:10:32.247627 | orchestrator |
2025-09-19 17:10:32.247639 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 17:10:32.247649 | orchestrator | Friday 19 September 2025 17:10:30 +0000 (0:00:08.032) 0:03:00.192 ******
2025-09-19 17:10:32.247661 | orchestrator | ===============================================================================
2025-09-19 17:10:32.247672 | orchestrator | designate : Copying over designate.conf -------------------------------- 23.14s
2025-09-19 17:10:32.247683 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.86s
2025-09-19 17:10:32.247699 | orchestrator | designate : Restart designate-api container ---------------------------- 10.98s
2025-09-19 17:10:32.247710 | orchestrator | designate : Restart designate-central container ------------------------ 10.15s
2025-09-19 17:10:32.247720 | orchestrator | designate : Restart designate-producer container ----------------------- 10.03s
2025-09-19 17:10:32.247731 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.21s
2025-09-19 17:10:32.247742 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.17s
2025-09-19 17:10:32.247753 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.03s
2025-09-19 17:10:32.247764 | orchestrator | designate : Copying over config.json files for services ----------------- 7.75s
2025-09-19 17:10:32.247774 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.88s
2025-09-19 17:10:32.247785 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.49s
2025-09-19 17:10:32.247796 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.69s
2025-09-19 17:10:32.247807 | orchestrator | designate : Restart designate-worker container -------------------------- 5.51s
2025-09-19 17:10:32.247817 | orchestrator | designate : Check designate containers ---------------------------------- 5.09s
2025-09-19 17:10:32.247828 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.64s
2025-09-19 17:10:32.247839 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.21s
2025-09-19 17:10:32.247856 | orchestrator | service-ks-register : designate | Creating users
------------------------ 4.14s
2025-09-19 17:10:32.247867 | orchestrator | designate : Copying over rndc.key --------------------------------------- 3.70s
2025-09-19 17:10:32.247878 | orchestrator | designate : Copying over named.conf ------------------------------------- 3.64s
2025-09-19 17:10:32.247889 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.62s
2025-09-19 17:10:32.247900 | orchestrator | 2025-09-19 17:10:32 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:10:32.247926 | orchestrator | 2025-09-19 17:10:32 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED
2025-09-19 17:10:32.247943 | orchestrator | 2025-09-19 17:10:32 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:10:35.274905 | orchestrator | 2025-09-19 17:10:35 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED
2025-09-19 17:10:35.275041 | orchestrator | 2025-09-19 17:10:35 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:10:35.276031 | orchestrator | 2025-09-19 17:10:35 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:10:35.276508 | orchestrator | 2025-09-19 17:10:35 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED
2025-09-19 17:10:35.276530 | orchestrator | 2025-09-19 17:10:35 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:10:38.306625 | orchestrator | 2025-09-19 17:10:38 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED
2025-09-19 17:10:38.308478 | orchestrator | 2025-09-19 17:10:38 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:10:38.310304 | orchestrator | 2025-09-19 17:10:38 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:10:38.313974 | orchestrator | 2025-09-19 17:10:38 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED
2025-09-19 17:10:38.314299 | orchestrator | 2025-09-19 17:10:38 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:10:41.340254 | orchestrator | 2025-09-19 17:10:41 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED
2025-09-19 17:10:41.340776 | orchestrator | 2025-09-19 17:10:41 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:10:41.341275 | orchestrator | 2025-09-19 17:10:41 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:10:41.342007 | orchestrator | 2025-09-19 17:10:41 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED
2025-09-19 17:10:41.342080 | orchestrator | 2025-09-19 17:10:41 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:10:44.427569 | orchestrator | 2025-09-19 17:10:44 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED
2025-09-19 17:10:44.428001 | orchestrator | 2025-09-19 17:10:44 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:10:44.428631 | orchestrator | 2025-09-19 17:10:44 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:10:44.429426 | orchestrator | 2025-09-19 17:10:44 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED
2025-09-19 17:10:44.429460 | orchestrator | 2025-09-19 17:10:44 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:10:47.448680 | orchestrator | 2025-09-19 17:10:47 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED
2025-09-19 17:10:47.448863 | orchestrator | 2025-09-19 17:10:47 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:10:47.449440 | orchestrator | 2025-09-19 17:10:47 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:10:47.450164 | orchestrator | 2025-09-19 17:10:47 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED
2025-09-19 17:10:47.450193 | orchestrator | 2025-09-19 17:10:47 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:10:50.481950 | orchestrator | 2025-09-19 17:10:50 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED
2025-09-19 17:10:50.482315 | orchestrator | 2025-09-19 17:10:50 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:10:50.483741 | orchestrator | 2025-09-19 17:10:50 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:10:50.485578 | orchestrator | 2025-09-19 17:10:50 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED
2025-09-19 17:10:50.485640 | orchestrator | 2025-09-19 17:10:50 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:10:53.522760 | orchestrator | 2025-09-19 17:10:53 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED
2025-09-19 17:10:53.524013 | orchestrator | 2025-09-19 17:10:53 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:10:53.524488 | orchestrator | 2025-09-19 17:10:53 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:10:53.526726 | orchestrator | 2025-09-19 17:10:53 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED
2025-09-19 17:10:53.526748 | orchestrator | 2025-09-19 17:10:53 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:10:56.620027 | orchestrator | 2025-09-19 17:10:56 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED
2025-09-19 17:10:56.620135 | orchestrator | 2025-09-19 17:10:56 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:10:56.620150 | orchestrator | 2025-09-19 17:10:56 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:10:56.620162 | orchestrator | 2025-09-19 17:10:56 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED
2025-09-19 17:10:56.620173 | orchestrator | 2025-09-19 17:10:56 | INFO  |
Wait 1 second(s) until the next check 2025-09-19 17:10:59.644433 | orchestrator | 2025-09-19 17:10:59 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED 2025-09-19 17:10:59.646224 | orchestrator | 2025-09-19 17:10:59 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED 2025-09-19 17:10:59.646280 | orchestrator | 2025-09-19 17:10:59 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:10:59.646293 | orchestrator | 2025-09-19 17:10:59 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:10:59.646305 | orchestrator | 2025-09-19 17:10:59 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:11:02.670101 | orchestrator | 2025-09-19 17:11:02 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED 2025-09-19 17:11:02.670242 | orchestrator | 2025-09-19 17:11:02 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED 2025-09-19 17:11:02.670452 | orchestrator | 2025-09-19 17:11:02 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:11:02.671160 | orchestrator | 2025-09-19 17:11:02 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:11:02.671184 | orchestrator | 2025-09-19 17:11:02 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:11:05.712278 | orchestrator | 2025-09-19 17:11:05 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED 2025-09-19 17:11:05.714169 | orchestrator | 2025-09-19 17:11:05 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED 2025-09-19 17:11:05.715515 | orchestrator | 2025-09-19 17:11:05 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:11:05.717034 | orchestrator | 2025-09-19 17:11:05 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:11:05.717439 | orchestrator | 2025-09-19 17:11:05 | INFO  | Wait 1 second(s) until the next 
check 2025-09-19 17:11:08.751717 | orchestrator | 2025-09-19 17:11:08 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED 2025-09-19 17:11:08.753172 | orchestrator | 2025-09-19 17:11:08 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED 2025-09-19 17:11:08.754851 | orchestrator | 2025-09-19 17:11:08 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:11:08.756728 | orchestrator | 2025-09-19 17:11:08 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:11:08.756763 | orchestrator | 2025-09-19 17:11:08 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:11:11.791415 | orchestrator | 2025-09-19 17:11:11 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED 2025-09-19 17:11:11.792151 | orchestrator | 2025-09-19 17:11:11 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED 2025-09-19 17:11:11.793305 | orchestrator | 2025-09-19 17:11:11 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:11:11.793621 | orchestrator | 2025-09-19 17:11:11 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:11:11.793653 | orchestrator | 2025-09-19 17:11:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:11:14.850817 | orchestrator | 2025-09-19 17:11:14 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state STARTED 2025-09-19 17:11:14.850970 | orchestrator | 2025-09-19 17:11:14 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED 2025-09-19 17:11:14.850989 | orchestrator | 2025-09-19 17:11:14 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:11:14.851002 | orchestrator | 2025-09-19 17:11:14 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:11:14.851021 | orchestrator | 2025-09-19 17:11:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 
17:11:17.857304 | orchestrator | 2025-09-19 17:11:17 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED 2025-09-19 17:11:17.857413 | orchestrator | 2025-09-19 17:11:17 | INFO  | Task b287b0bf-abca-42a5-8047-0ae3f8627fea is in state SUCCESS 2025-09-19 17:11:17.862182 | orchestrator | 2025-09-19 17:11:17 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED 2025-09-19 17:11:17.864483 | orchestrator | 2025-09-19 17:11:17 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:11:17.864692 | orchestrator | 2025-09-19 17:11:17 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:11:17.864714 | orchestrator | 2025-09-19 17:11:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:11:33.120471 | orchestrator
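The "Task … is in state STARTED" / "Wait 1 second(s) until the next check" entries above come from a plain state-polling loop over the submitted tasks. A minimal sketch of that pattern, assuming a hypothetical `get_state` callable (task id → state string) rather than the actual OSISM client:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll each task until it reaches a terminal state.

    `get_state` is a hypothetical callable mapping a task id to a state
    string such as "STARTED" or "SUCCESS"; it stands in for the real
    task-status API queried in the log above.
    """
    pending = set(task_ids)
    results = {}
    deadline = time.monotonic() + timeout
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                results[task_id] = state
        # Drop finished tasks; wait before re-checking the rest.
        pending -= results.keys()
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

Each pass prints one line per still-pending task and one wait line, which is exactly the repeating shape of the log above; tasks that finish simply stop appearing in later cycles.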
| 2025-09-19 17:11:33 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:11:36.178510 | orchestrator | 2025-09-19 17:11:36 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED 2025-09-19 17:11:36.181285 | orchestrator | 2025-09-19 17:11:36 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED 2025-09-19 17:11:36.183462 | orchestrator | 2025-09-19 17:11:36 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:11:36.185583 | orchestrator | 2025-09-19 17:11:36 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state STARTED 2025-09-19 17:11:36.186250 | orchestrator | 2025-09-19 17:11:36 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:11:39.227969 | orchestrator | 2025-09-19 17:11:39 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED 2025-09-19 17:11:39.228265 | orchestrator | 2025-09-19 17:11:39 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED 2025-09-19 17:11:39.229669 | orchestrator | 2025-09-19 17:11:39 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED 2025-09-19 17:11:39.231310 | orchestrator | 2025-09-19 17:11:39 | INFO  | Task 1fe9ffd1-2301-4350-902b-1183f1edaa38 is in state SUCCESS 2025-09-19 17:11:39.235340 | orchestrator | 2025-09-19 17:11:39.235379 | orchestrator | 2025-09-19 17:11:39.235391 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:11:39.235403 | orchestrator | 2025-09-19 17:11:39.235414 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:11:39.235425 | orchestrator | Friday 19 September 2025 17:10:35 +0000 (0:00:00.248) 0:00:00.248 ****** 2025-09-19 17:11:39.235436 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:11:39.235447 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:11:39.235458 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:11:39.235469 | 
orchestrator | ok: [testbed-manager] 2025-09-19 17:11:39.235479 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:11:39.235490 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:11:39.235500 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:11:39.235511 | orchestrator | 2025-09-19 17:11:39.235522 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:11:39.235533 | orchestrator | Friday 19 September 2025 17:10:36 +0000 (0:00:01.159) 0:00:01.407 ****** 2025-09-19 17:11:39.235544 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-19 17:11:39.235555 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-19 17:11:39.235565 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-19 17:11:39.235577 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-19 17:11:39.235588 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-19 17:11:39.235598 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-09-19 17:11:39.235609 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-19 17:11:39.235620 | orchestrator | 2025-09-19 17:11:39.235630 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-19 17:11:39.235641 | orchestrator | 2025-09-19 17:11:39.235652 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-19 17:11:39.235663 | orchestrator | Friday 19 September 2025 17:10:38 +0000 (0:00:01.526) 0:00:02.934 ****** 2025-09-19 17:11:39.235675 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 17:11:39.235687 | orchestrator | 2025-09-19 17:11:39.235698 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating 
services] ********************** 2025-09-19 17:11:39.235708 | orchestrator | Friday 19 September 2025 17:10:41 +0000 (0:00:03.020) 0:00:05.954 ****** 2025-09-19 17:11:39.235720 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-19 17:11:39.235731 | orchestrator | 2025-09-19 17:11:39.235742 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-19 17:11:39.235752 | orchestrator | Friday 19 September 2025 17:10:44 +0000 (0:00:03.745) 0:00:09.700 ****** 2025-09-19 17:11:39.235779 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-19 17:11:39.235791 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-19 17:11:39.235802 | orchestrator | 2025-09-19 17:11:39.235813 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-19 17:11:39.235823 | orchestrator | Friday 19 September 2025 17:10:52 +0000 (0:00:07.689) 0:00:17.390 ****** 2025-09-19 17:11:39.235834 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 17:11:39.235845 | orchestrator | 2025-09-19 17:11:39.235855 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-19 17:11:39.235881 | orchestrator | Friday 19 September 2025 17:10:56 +0000 (0:00:03.737) 0:00:21.127 ****** 2025-09-19 17:11:39.235892 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 17:11:39.235903 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-19 17:11:39.236252 | orchestrator | 2025-09-19 17:11:39.236265 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-19 17:11:39.236276 | orchestrator | Friday 19 September 2025 17:11:00 +0000 (0:00:04.406) 
0:00:25.533 ****** 2025-09-19 17:11:39.236287 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 17:11:39.236297 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-19 17:11:39.236308 | orchestrator | 2025-09-19 17:11:39.236319 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-19 17:11:39.236330 | orchestrator | Friday 19 September 2025 17:11:08 +0000 (0:00:07.565) 0:00:33.099 ****** 2025-09-19 17:11:39.236340 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-19 17:11:39.236350 | orchestrator | 2025-09-19 17:11:39.236361 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:11:39.236372 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:11:39.236383 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:11:39.236394 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:11:39.236404 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:11:39.236415 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:11:39.236437 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:11:39.236449 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:11:39.236459 | orchestrator | 2025-09-19 17:11:39.236470 | orchestrator | 2025-09-19 17:11:39.236481 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:11:39.236492 | orchestrator | Friday 19 September 2025 17:11:14 +0000 (0:00:06.341) 0:00:39.440 ****** 
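The service-ks-register tasks in this play report `ok` for resources that already exist (the service project, the admin role) and `changed` for ones that had to be created (the swift service and endpoints, the ceph_rgw user, the ResellerAdmin role). That ok/changed split comes from a check-then-create idempotency pattern; a simplified sketch against a hypothetical `client` object, not the actual kolla-ansible module code:

```python
def ensure_resource(client, kind, name, **attrs):
    """Look the resource up first; create it only when it is missing.

    Returns ("ok", resource) when it already existed and
    ("changed", resource) when it was created -- the two result states
    visible in the play output. `client` is a hypothetical object
    exposing find_<kind>(name) and create_<kind>(name, **attrs).
    """
    existing = getattr(client, f"find_{kind}")(name)
    if existing is not None:
        return "ok", existing
    return "changed", getattr(client, f"create_{kind}")(name, **attrs)
```

Running this twice for the same name yields `changed` once and `ok` afterwards, which is why replaying the same deploy play shows mostly `ok` tasks.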
2025-09-19 17:11:39.236502 | orchestrator | =============================================================================== 2025-09-19 17:11:39.236513 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.69s 2025-09-19 17:11:39.236523 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 7.57s 2025-09-19 17:11:39.236534 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.34s 2025-09-19 17:11:39.236544 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.41s 2025-09-19 17:11:39.236555 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.74s 2025-09-19 17:11:39.236565 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.74s 2025-09-19 17:11:39.236576 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 3.02s 2025-09-19 17:11:39.236586 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.53s 2025-09-19 17:11:39.236597 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.16s 2025-09-19 17:11:39.236607 | orchestrator | 2025-09-19 17:11:39.236618 | orchestrator | 2025-09-19 17:11:39.236628 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:11:39.236639 | orchestrator | 2025-09-19 17:11:39.236649 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:11:39.236670 | orchestrator | Friday 19 September 2025 17:09:41 +0000 (0:00:00.418) 0:00:00.418 ****** 2025-09-19 17:11:39.236681 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:11:39.236691 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:11:39.236702 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:11:39.236713 | orchestrator | 2025-09-19 
17:11:39.236723 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:11:39.236734 | orchestrator | Friday 19 September 2025 17:09:41 +0000 (0:00:00.361) 0:00:00.780 ****** 2025-09-19 17:11:39.236744 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-19 17:11:39.236755 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-19 17:11:39.236766 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-19 17:11:39.236776 | orchestrator | 2025-09-19 17:11:39.236787 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-19 17:11:39.236797 | orchestrator | 2025-09-19 17:11:39.236815 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-19 17:11:39.236826 | orchestrator | Friday 19 September 2025 17:09:42 +0000 (0:00:00.729) 0:00:01.510 ****** 2025-09-19 17:11:39.236836 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:11:39.236847 | orchestrator | 2025-09-19 17:11:39.236859 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-19 17:11:39.236871 | orchestrator | Friday 19 September 2025 17:09:44 +0000 (0:00:01.562) 0:00:03.072 ****** 2025-09-19 17:11:39.236883 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-19 17:11:39.236895 | orchestrator | 2025-09-19 17:11:39.236908 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-19 17:11:39.236944 | orchestrator | Friday 19 September 2025 17:09:48 +0000 (0:00:04.209) 0:00:07.281 ****** 2025-09-19 17:11:39.236956 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-19 17:11:39.236968 | orchestrator | changed: 
[testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-19 17:11:39.236980 | orchestrator | 2025-09-19 17:11:39.236992 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-19 17:11:39.237004 | orchestrator | Friday 19 September 2025 17:09:55 +0000 (0:00:06.960) 0:00:14.242 ****** 2025-09-19 17:11:39.237017 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 17:11:39.237029 | orchestrator | 2025-09-19 17:11:39.237041 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-19 17:11:39.237053 | orchestrator | Friday 19 September 2025 17:09:58 +0000 (0:00:03.383) 0:00:17.625 ****** 2025-09-19 17:11:39.237065 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 17:11:39.237077 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-19 17:11:39.237089 | orchestrator | 2025-09-19 17:11:39.237102 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-09-19 17:11:39.237113 | orchestrator | Friday 19 September 2025 17:10:02 +0000 (0:00:03.924) 0:00:21.550 ****** 2025-09-19 17:11:39.237126 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 17:11:39.237138 | orchestrator | 2025-09-19 17:11:39.237150 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-19 17:11:39.237162 | orchestrator | Friday 19 September 2025 17:10:06 +0000 (0:00:03.449) 0:00:24.999 ****** 2025-09-19 17:11:39.237174 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-19 17:11:39.237186 | orchestrator | 2025-09-19 17:11:39.237199 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-19 17:11:39.237210 | orchestrator | Friday 19 September 2025 17:10:10 +0000 (0:00:04.471) 0:00:29.471 ****** 2025-09-19 
17:11:39.237222 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:11:39.237232 | orchestrator | 2025-09-19 17:11:39.237243 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-19 17:11:39.237268 | orchestrator | Friday 19 September 2025 17:10:13 +0000 (0:00:03.485) 0:00:32.956 ****** 2025-09-19 17:11:39.237280 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:11:39.237290 | orchestrator | 2025-09-19 17:11:39.237301 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-19 17:11:39.237312 | orchestrator | Friday 19 September 2025 17:10:18 +0000 (0:00:04.167) 0:00:37.124 ****** 2025-09-19 17:11:39.237322 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:11:39.237333 | orchestrator | 2025-09-19 17:11:39.237344 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-19 17:11:39.237354 | orchestrator | Friday 19 September 2025 17:10:22 +0000 (0:00:04.096) 0:00:41.220 ****** 2025-09-19 17:11:39.237368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 
2025-09-19 17:11:39.237388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.237400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.237412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.237438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.237451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.237462 | orchestrator | 2025-09-19 17:11:39.237473 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-19 17:11:39.237483 | orchestrator | Friday 19 September 2025 17:10:23 +0000 (0:00:01.452) 0:00:42.673 ****** 2025-09-19 17:11:39.237494 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:11:39.237505 | orchestrator | 2025-09-19 17:11:39.237515 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-19 17:11:39.237526 | orchestrator | Friday 19 September 2025 17:10:23 +0000 (0:00:00.116) 0:00:42.789 ****** 2025-09-19 17:11:39.237537 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:11:39.237547 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:11:39.237558 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:11:39.237568 | orchestrator | 2025-09-19 17:11:39.237579 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-09-19 17:11:39.237590 | orchestrator | Friday 19 September 2025 17:10:24 +0000 (0:00:00.380) 0:00:43.169 ****** 2025-09-19 17:11:39.237600 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 17:11:39.237611 | orchestrator | 2025-09-19 17:11:39.237626 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-09-19 17:11:39.237637 | orchestrator | Friday 19 September 2025 17:10:24 +0000 (0:00:00.757) 0:00:43.927 ****** 2025-09-19 17:11:39.237648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.237666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.237685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.237696 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.237712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 
5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.237724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.237741 | orchestrator | 2025-09-19 17:11:39.237752 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-09-19 17:11:39.237763 | orchestrator | Friday 19 September 2025 17:10:27 +0000 (0:00:02.525) 0:00:46.453 ****** 2025-09-19 17:11:39.237774 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:11:39.237784 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:11:39.237795 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:11:39.237805 | orchestrator | 2025-09-19 17:11:39.237816 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-19 17:11:39.237827 | orchestrator | Friday 19 September 2025 17:10:27 +0000 (0:00:00.314) 0:00:46.767 ****** 2025-09-19 17:11:39.237838 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:11:39.237848 | orchestrator | 2025-09-19 17:11:39.237859 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-09-19 17:11:39.237870 | orchestrator | Friday 19 September 2025 17:10:28 +0000 (0:00:00.585) 0:00:47.352 ****** 
2025-09-19 17:11:39.237888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.237901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.237981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.237998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.238077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.238101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.238112 | orchestrator | 2025-09-19 17:11:39.238122 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-09-19 17:11:39.238132 | orchestrator | Friday 19 September 2025 17:10:30 +0000 (0:00:02.281) 0:00:49.634 ****** 2025-09-19 17:11:39.238142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 17:11:39.238163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:11:39.238173 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:11:39.238190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 17:11:39.238200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:11:39.238209 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:11:39.238226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 17:11:39.238236 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:11:39.238247 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:11:39.238256 | orchestrator | 2025-09-19 17:11:39.238266 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-09-19 17:11:39.238276 | orchestrator | Friday 19 September 2025 17:10:31 +0000 (0:00:00.551) 0:00:50.185 ****** 2025-09-19 17:11:39.238290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': 
'9511'}}}})  2025-09-19 17:11:39.238307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:11:39.238318 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:11:39.238332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 17:11:39.238343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:11:39.238353 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:11:39.238363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 17:11:39.238386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:11:39.238396 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:11:39.238405 | orchestrator | 2025-09-19 17:11:39.238415 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-09-19 17:11:39.238424 | orchestrator | Friday 19 September 2025 17:10:32 +0000 (0:00:01.497) 0:00:51.683 ****** 2025-09-19 17:11:39.238434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.238451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.238461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.238475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.238493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.238503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.238513 | orchestrator | 2025-09-19 17:11:39.238523 | orchestrator | TASK 
[magnum : Copying over magnum.conf] *************************************** 2025-09-19 17:11:39.238532 | orchestrator | Friday 19 September 2025 17:10:35 +0000 (0:00:02.404) 0:00:54.087 ****** 2025-09-19 17:11:39.238548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.238559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.238580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.238590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.238600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.238618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.238628 | orchestrator | 2025-09-19 17:11:39.238638 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-09-19 17:11:39.238647 | orchestrator | Friday 19 September 2025 17:10:42 +0000 (0:00:07.374) 0:01:01.462 ****** 2025-09-19 17:11:39.238657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 17:11:39.238678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:11:39.238688 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:11:39.238698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 17:11:39.238708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:11:39.238718 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:11:39.238733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-19 17:11:39.238743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:11:39.238760 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:11:39.238770 | orchestrator | 2025-09-19 17:11:39.238779 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-09-19 17:11:39.238789 | orchestrator | Friday 19 September 2025 17:10:43 +0000 (0:00:01.430) 0:01:02.893 ****** 2025-09-19 17:11:39.238803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 
'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.238814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.238829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-19 17:11:39.238839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.238855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:11:39.238873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-19 17:11:39.238883 | orchestrator |
2025-09-19 17:11:39.238893 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-19 17:11:39.238902 | orchestrator | Friday 19 September 2025 17:10:46 +0000 (0:00:02.966) 0:01:05.860 ******
2025-09-19 17:11:39.238912 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:11:39.238940 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:11:39.238949 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:11:39.238959 | orchestrator |
2025-09-19 17:11:39.238968 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2025-09-19 17:11:39.238978 | orchestrator | Friday 19 September 2025 17:10:47 +0000 (0:00:00.412) 0:01:06.272 ******
2025-09-19 17:11:39.238987 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:11:39.238997 | orchestrator |
2025-09-19 17:11:39.239006 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2025-09-19 17:11:39.239015 | orchestrator | Friday 19 September 2025 17:10:49 +0000 (0:00:02.632) 0:01:08.904 ******
2025-09-19 17:11:39.239025 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:11:39.239034 | orchestrator |
2025-09-19 17:11:39.239044 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2025-09-19 17:11:39.239053 | orchestrator | Friday 19 September 2025 17:10:52 +0000 (0:00:02.635) 0:01:11.540 ******
2025-09-19 17:11:39.239063 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:11:39.239072 | orchestrator |
2025-09-19 17:11:39.239081 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-19 17:11:39.239091 | orchestrator | Friday 19 September 2025 17:11:08 +0000 (0:00:16.370) 0:01:27.911 ******
2025-09-19 17:11:39.239100 | orchestrator |
2025-09-19 17:11:39.239110 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-19 17:11:39.239119 | orchestrator | Friday 19 September 2025 17:11:08 +0000 (0:00:00.059) 0:01:27.970 ******
2025-09-19 17:11:39.239129 | orchestrator |
2025-09-19 17:11:39.239138 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2025-09-19 17:11:39.239147 | orchestrator | Friday 19 September 2025 17:11:09 +0000 (0:00:00.078) 0:01:28.048 ******
2025-09-19 17:11:39.239157 | orchestrator |
2025-09-19 17:11:39.239172 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2025-09-19 17:11:39.239182 | orchestrator | Friday 19 September 2025 17:11:09 +0000 (0:00:00.067) 0:01:28.115 ******
2025-09-19 17:11:39.239191 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:11:39.239201 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:11:39.239210 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:11:39.239220 | orchestrator |
2025-09-19 17:11:39.239229 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2025-09-19 17:11:39.239244 | orchestrator | Friday 19 September 2025 17:11:24 +0000 (0:00:15.073) 0:01:43.189 ******
2025-09-19 17:11:39.239254 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:11:39.239264 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:11:39.239273 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:11:39.239282 | orchestrator |
2025-09-19 17:11:39.239292 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 17:11:39.239302 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-19 17:11:39.239312 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 17:11:39.239322 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-09-19 17:11:39.239331 | orchestrator |
2025-09-19 17:11:39.239341 | orchestrator |
2025-09-19 17:11:39.239350 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 17:11:39.239360 | orchestrator | Friday 19 September 2025 17:11:38 +0000 (0:00:14.079) 0:01:57.269 ******
2025-09-19 17:11:39.239369 | orchestrator | ===============================================================================
2025-09-19 17:11:39.239379 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.37s
2025-09-19 17:11:39.239388 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 15.07s
2025-09-19 17:11:39.239397 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 14.08s
2025-09-19 17:11:39.239407 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 7.37s
2025-09-19 17:11:39.239416 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.96s
2025-09-19 17:11:39.239426 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.47s
2025-09-19 17:11:39.239435 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 4.21s
2025-09-19 17:11:39.239445 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.17s
2025-09-19 17:11:39.239454 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 4.10s
2025-09-19 17:11:39.239464 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.92s
2025-09-19 17:11:39.239473 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.49s
2025-09-19 17:11:39.239487 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.45s
2025-09-19 17:11:39.239497 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.38s
2025-09-19 17:11:39.239506 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.97s
2025-09-19 17:11:39.239516 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.64s
2025-09-19 17:11:39.239525 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.63s
2025-09-19 17:11:39.239535 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.53s
2025-09-19 17:11:39.239544 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.40s
2025-09-19 17:11:39.239554 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.28s
2025-09-19 17:11:39.239563 | orchestrator | magnum : include_tasks -------------------------------------------------- 1.56s
2025-09-19 17:11:39.239579 | orchestrator | 2025-09-19 17:11:39 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:11:42.266622 | orchestrator | 2025-09-19 17:11:42 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:11:42.267594 | orchestrator | 2025-09-19 17:11:42 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:11:42.268584 | orchestrator | 2025-09-19 17:11:42 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:11:42.269702 | orchestrator | 2025-09-19 17:11:42 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19
17:11:42.269906 | orchestrator | 2025-09-19 17:11:42 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:11:45.306354 | orchestrator | 2025-09-19 17:11:45 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:11:45.309033 | orchestrator | 2025-09-19 17:11:45 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:11:45.309354 | orchestrator | 2025-09-19 17:11:45 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:11:45.310239 | orchestrator | 2025-09-19 17:11:45 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:11:45.310365 | orchestrator | 2025-09-19 17:11:45 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:11:48.347574 | orchestrator | 2025-09-19 17:11:48 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:11:48.348230 | orchestrator | 2025-09-19 17:11:48 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:11:48.350264 | orchestrator | 2025-09-19 17:11:48 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:11:48.351065 | orchestrator | 2025-09-19 17:11:48 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:11:48.351195 | orchestrator | 2025-09-19 17:11:48 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:11:51.396402 | orchestrator | 2025-09-19 17:11:51 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:11:51.397392 | orchestrator | 2025-09-19 17:11:51 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:11:51.399483 | orchestrator | 2025-09-19 17:11:51 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:11:51.401990 | orchestrator | 2025-09-19 17:11:51 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:11:51.402069 | orchestrator | 2025-09-19 17:11:51 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:11:54.428161 | orchestrator | 2025-09-19 17:11:54 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:11:54.428861 | orchestrator | 2025-09-19 17:11:54 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:11:54.429970 | orchestrator | 2025-09-19 17:11:54 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:11:54.431183 | orchestrator | 2025-09-19 17:11:54 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:11:54.431238 | orchestrator | 2025-09-19 17:11:54 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:11:57.461832 | orchestrator | 2025-09-19 17:11:57 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:11:57.463445 | orchestrator | 2025-09-19 17:11:57 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:11:57.465489 | orchestrator | 2025-09-19 17:11:57 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:11:57.467022 | orchestrator | 2025-09-19 17:11:57 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:11:57.467317 | orchestrator | 2025-09-19 17:11:57 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:12:00.497685 | orchestrator | 2025-09-19 17:12:00 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:12:00.498143 | orchestrator | 2025-09-19 17:12:00 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:12:00.498819 | orchestrator | 2025-09-19 17:12:00 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:12:00.500616 | orchestrator | 2025-09-19 17:12:00 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:12:00.500641 | orchestrator | 2025-09-19 17:12:00 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:12:03.537197 | orchestrator | 2025-09-19 17:12:03 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:12:03.538670 | orchestrator | 2025-09-19 17:12:03 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:12:03.540116 | orchestrator | 2025-09-19 17:12:03 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:12:03.541028 | orchestrator | 2025-09-19 17:12:03 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state STARTED
2025-09-19 17:12:03.541063 | orchestrator | 2025-09-19 17:12:03 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:12:06.574745 | orchestrator | 2025-09-19 17:12:06 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:12:06.576258 | orchestrator | 2025-09-19 17:12:06 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:12:06.576309 | orchestrator | 2025-09-19 17:12:06 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:12:06.578517 | orchestrator |
2025-09-19 17:12:06.578556 | orchestrator | 2025-09-19 17:12:06 | INFO  | Task 2b4f1952-9a68-4723-85d9-90ffac95204a is in state SUCCESS
2025-09-19 17:12:06.580189 | orchestrator |
2025-09-19 17:12:06.580226 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 17:12:06.580240 | orchestrator |
2025-09-19 17:12:06.580252 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 17:12:06.580264 | orchestrator | Friday 19 September 2025 17:07:30 +0000 (0:00:00.421) 0:00:00.421 ******
2025-09-19 17:12:06.580275 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:12:06.580287 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:12:06.580298 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:12:06.580309 | orchestrator | ok: [testbed-node-3]
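The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` records above come from a client polling several long-running deployment tasks until each reaches a terminal state. A minimal sketch of that wait loop, assuming a hypothetical `get_state` callable (task ID -> state string) standing in for the real OSISM task-status query:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0, sleep=time.sleep):
    """Poll each pending task's state until all reach a terminal state.

    Mirrors the behaviour in the log: report the state of every pending
    task, wait `interval` seconds, then check again.  `get_state` is a
    hypothetical stand-in for the real task-status lookup.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    states = {}
    while pending:
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
        # Drop tasks that reached a terminal state (SUCCESS or FAILURE).
        pending = {t for t in pending if states[t] not in ("SUCCESS", "FAILURE")}
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still running: {sorted(pending)}")
            sleep(interval)  # "Wait 1 second(s) until the next check"
    return states
```

With `interval=1` this produces the cadence seen in the log: one status line per pending task, a short wait, then the next round of checks, until every task reports SUCCESS.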
2025-09-19 17:12:06.581453 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:12:06.581493 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:12:06.581510 | orchestrator |
2025-09-19 17:12:06.581529 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 17:12:06.581547 | orchestrator | Friday 19 September 2025 17:07:31 +0000 (0:00:00.882) 0:00:01.304 ******
2025-09-19 17:12:06.581563 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-09-19 17:12:06.581581 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-09-19 17:12:06.581599 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-09-19 17:12:06.581616 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-09-19 17:12:06.581634 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-09-19 17:12:06.581652 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-09-19 17:12:06.581670 | orchestrator |
2025-09-19 17:12:06.581689 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-09-19 17:12:06.581770 | orchestrator |
2025-09-19 17:12:06.581785 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 17:12:06.581796 | orchestrator | Friday 19 September 2025 17:07:31 +0000 (0:00:00.761) 0:00:02.065 ******
2025-09-19 17:12:06.581809 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:12:06.581821 | orchestrator |
2025-09-19 17:12:06.581832 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-09-19 17:12:06.581843 | orchestrator | Friday 19 September 2025 17:07:33 +0000 (0:00:01.067) 0:00:03.132 ******
2025-09-19 17:12:06.581854 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:12:06.581864 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:12:06.581875 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:12:06.581886 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:12:06.581897 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:12:06.581908 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:12:06.581919 | orchestrator |
2025-09-19 17:12:06.581967 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-09-19 17:12:06.581978 | orchestrator | Friday 19 September 2025 17:07:34 +0000 (0:00:01.062) 0:00:04.195 ******
2025-09-19 17:12:06.581989 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:12:06.582001 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:12:06.582012 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:12:06.582128 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:12:06.582142 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:12:06.582154 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:12:06.582166 | orchestrator |
2025-09-19 17:12:06.582179 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-09-19 17:12:06.582192 | orchestrator | Friday 19 September 2025 17:07:35 +0000 (0:00:00.979) 0:00:05.174 ******
2025-09-19 17:12:06.582205 | orchestrator | ok: [testbed-node-0] => {
2025-09-19 17:12:06.582218 | orchestrator |  "changed": false,
2025-09-19 17:12:06.582239 | orchestrator |  "msg": "All assertions passed"
2025-09-19 17:12:06.582253 | orchestrator | }
2025-09-19 17:12:06.582266 | orchestrator | ok: [testbed-node-1] => {
2025-09-19 17:12:06.582279 | orchestrator |  "changed": false,
2025-09-19 17:12:06.582291 | orchestrator |  "msg": "All assertions passed"
2025-09-19 17:12:06.582304 | orchestrator | }
2025-09-19 17:12:06.582316 | orchestrator | ok: [testbed-node-2] => {
2025-09-19 17:12:06.582329 | orchestrator |  "changed": false,
2025-09-19 17:12:06.582341 | orchestrator |  "msg": "All assertions passed"
2025-09-19 17:12:06.582354 | orchestrator | }
2025-09-19 17:12:06.582366 | orchestrator | ok: [testbed-node-3] => {
2025-09-19 17:12:06.582379 | orchestrator |  "changed": false,
2025-09-19 17:12:06.582390 | orchestrator |  "msg": "All assertions passed"
2025-09-19 17:12:06.582401 | orchestrator | }
2025-09-19 17:12:06.582411 | orchestrator | ok: [testbed-node-4] => {
2025-09-19 17:12:06.582422 | orchestrator |  "changed": false,
2025-09-19 17:12:06.582433 | orchestrator |  "msg": "All assertions passed"
2025-09-19 17:12:06.582444 | orchestrator | }
2025-09-19 17:12:06.582455 | orchestrator | ok: [testbed-node-5] => {
2025-09-19 17:12:06.582465 | orchestrator |  "changed": false,
2025-09-19 17:12:06.582476 | orchestrator |  "msg": "All assertions passed"
2025-09-19 17:12:06.582487 | orchestrator | }
2025-09-19 17:12:06.582497 | orchestrator |
2025-09-19 17:12:06.582508 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-09-19 17:12:06.582519 | orchestrator | Friday 19 September 2025 17:07:35 +0000 (0:00:00.624) 0:00:05.798 ******
2025-09-19 17:12:06.582530 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.582541 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.582552 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.582562 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.582573 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.582583 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.582606 | orchestrator |
2025-09-19 17:12:06.582617 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-09-19 17:12:06.582628 | orchestrator | Friday 19 September 2025 17:07:36 +0000 (0:00:00.514) 0:00:06.313 ******
2025-09-19 17:12:06.582638 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-09-19 17:12:06.582649 | orchestrator |
2025-09-19 17:12:06.582660 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-09-19 17:12:06.582671 | orchestrator | Friday 19 September 2025 17:07:39 +0000 (0:00:03.069) 0:00:09.382 ******
2025-09-19 17:12:06.582682 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-09-19 17:12:06.582694 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-09-19 17:12:06.582705 | orchestrator |
2025-09-19 17:12:06.582776 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-09-19 17:12:06.582789 | orchestrator | Friday 19 September 2025 17:07:46 +0000 (0:00:07.120) 0:00:16.502 ******
2025-09-19 17:12:06.582800 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 17:12:06.582811 | orchestrator |
2025-09-19 17:12:06.582822 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-09-19 17:12:06.582832 | orchestrator | Friday 19 September 2025 17:07:49 +0000 (0:00:03.441) 0:00:19.944 ******
2025-09-19 17:12:06.582843 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 17:12:06.582854 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-09-19 17:12:06.582864 | orchestrator |
2025-09-19 17:12:06.582875 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-09-19 17:12:06.582886 | orchestrator | Friday 19 September 2025 17:07:54 +0000 (0:00:04.418) 0:00:24.362 ******
2025-09-19 17:12:06.582896 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 17:12:06.582907 | orchestrator |
2025-09-19 17:12:06.582917 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-09-19 17:12:06.582984 | orchestrator | Friday 19 September 2025 17:07:58 +0000 (0:00:03.947) 0:00:28.310 ******
2025-09-19 17:12:06.582999 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-09-19 17:12:06.583011 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-09-19 17:12:06.583021 | orchestrator |
2025-09-19 17:12:06.583032 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-09-19 17:12:06.583043 | orchestrator | Friday 19 September 2025 17:08:06 +0000 (0:00:08.800) 0:00:37.111 ******
2025-09-19 17:12:06.583054 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.583065 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.583076 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.583087 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.583098 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.583109 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.583119 | orchestrator |
2025-09-19 17:12:06.583130 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-09-19 17:12:06.583141 | orchestrator | Friday 19 September 2025 17:08:07 +0000 (0:00:00.743) 0:00:37.854 ******
2025-09-19 17:12:06.583152 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.583163 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.583174 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.583185 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.583195 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.583206 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.583217 | orchestrator |
2025-09-19 17:12:06.583228 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-09-19 17:12:06.583239 | orchestrator | Friday 19 September 2025 17:08:10 +0000 (0:00:02.367) 0:00:40.222 ******
2025-09-19 17:12:06.583250 | orchestrator | ok: [testbed-node-0]
2025-09-19
17:12:06.583261 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:12:06.583281 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:12:06.583292 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:12:06.583303 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:12:06.583314 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:12:06.583324 | orchestrator |
2025-09-19 17:12:06.583335 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-09-19 17:12:06.583346 | orchestrator | Friday 19 September 2025 17:08:11 +0000 (0:00:01.265) 0:00:41.487 ******
2025-09-19 17:12:06.583357 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.583368 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.583385 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.583395 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.583404 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.583414 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.583423 | orchestrator |
2025-09-19 17:12:06.583433 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-09-19 17:12:06.583443 | orchestrator | Friday 19 September 2025 17:08:14 +0000 (0:00:03.041) 0:00:44.529 ******
2025-09-19 17:12:06.583456 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 17:12:06.583526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 17:12:06.583540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 17:12:06.583551 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 17:12:06.583578 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 17:12:06.583590 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 17:12:06.583600 | orchestrator | 2025-09-19 17:12:06.583610 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-19 17:12:06.583620 | orchestrator | Friday 19 September 2025 17:08:17 +0000 (0:00:03.380) 0:00:47.909 ****** 2025-09-19 17:12:06.583630 | orchestrator | [WARNING]: Skipped 2025-09-19 17:12:06.583640 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-19 17:12:06.583649 | orchestrator | due to this access issue: 2025-09-19 17:12:06.583659 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-19 17:12:06.583669 | orchestrator | a directory 2025-09-19 17:12:06.583678 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 17:12:06.583688 | orchestrator | 2025-09-19 17:12:06.583698 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-19 17:12:06.583734 | orchestrator | Friday 19 September 2025 17:08:18 +0000 (0:00:00.739) 0:00:48.649 ****** 2025-09-19 17:12:06.583745 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 17:12:06.583757 | orchestrator | 2025-09-19 17:12:06.583766 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-19 17:12:06.583776 | orchestrator | Friday 19 September 2025 17:08:19 +0000 
(0:00:01.106) 0:00:49.755 ****** 2025-09-19 17:12:06.583786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 17:12:06.583803 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 17:12:06.583818 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 17:12:06.583829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 17:12:06.583866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 17:12:06.583878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 17:12:06.583896 | orchestrator | 2025-09-19 17:12:06.583906 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-09-19 17:12:06.583916 | orchestrator | Friday 19 September 2025 17:08:23 +0000 (0:00:04.266) 0:00:54.022 ****** 2025-09-19 17:12:06.583941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:12:06.583952 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.583966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:12:06.583977 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.583988 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:12:06.583998 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.584037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:12:06.584055 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.584065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:12:06.584075 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.584085 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:12:06.584094 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:12:06.584104 | orchestrator | 2025-09-19 17:12:06.584118 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-19 17:12:06.584128 | orchestrator | Friday 19 September 2025 17:08:27 +0000 (0:00:03.470) 0:00:57.492 ****** 2025-09-19 17:12:06.584138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:12:06.584148 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.584164 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:12:06.584184 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:12:06.584194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:12:06.584204 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.584214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:12:06.584223 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.584238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:12:06.584248 | orchestrator | skipping: [testbed-node-0] 2025-09-19 
17:12:06.584258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:12:06.584268 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.584277 | orchestrator | 2025-09-19 17:12:06.584287 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-19 17:12:06.584296 | orchestrator | Friday 19 September 2025 17:08:30 +0000 (0:00:03.552) 0:01:01.045 ****** 2025-09-19 17:12:06.584306 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.584315 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.584331 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.584340 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:12:06.584349 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.584359 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.584368 | orchestrator | 2025-09-19 17:12:06.584378 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-19 17:12:06.584394 | orchestrator | Friday 19 September 2025 17:08:33 +0000 (0:00:02.826) 0:01:03.872 ****** 2025-09-19 17:12:06.584404 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.584414 | orchestrator | 2025-09-19 17:12:06.584424 | orchestrator | TASK 
[neutron : Set neutron policy file] *************************************** 2025-09-19 17:12:06.584433 | orchestrator | Friday 19 September 2025 17:08:33 +0000 (0:00:00.138) 0:01:04.010 ****** 2025-09-19 17:12:06.584443 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.584452 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.584462 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.584471 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.584480 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.584489 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:12:06.584499 | orchestrator | 2025-09-19 17:12:06.584508 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-19 17:12:06.584518 | orchestrator | Friday 19 September 2025 17:08:34 +0000 (0:00:00.820) 0:01:04.831 ****** 2025-09-19 17:12:06.584528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:12:06.584538 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.584553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:12:06.584563 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.584573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:12:06.584590 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.584605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:12:06.584615 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.584625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:12:06.584635 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.584645 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.584655 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.584664 | orchestrator |
2025-09-19 17:12:06.584674 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-09-19 17:12:06.584684 | orchestrator | Friday 19 September 2025  17:08:37 +0000 (0:00:03.181)       0:01:08.012 ******
2025-09-19 17:12:06.584698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.584715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.584733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.584743 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.584753 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.584768 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.584788 | orchestrator |
2025-09-19 17:12:06.584798 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-09-19 17:12:06.584807 | orchestrator | Friday 19 September 2025  17:08:42 +0000 (0:00:04.782)       0:01:12.795 ******
2025-09-19 17:12:06.584818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.584835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.584845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.584856 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.584870 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.584886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.584896 | orchestrator |
2025-09-19 17:12:06.584906 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-09-19 17:12:06.584915 | orchestrator | Friday 19 September 2025  17:08:50 +0000 (0:00:08.078)       0:01:20.873 ******
2025-09-19 17:12:06.584949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.584959 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.584969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.584979 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.584993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.585009 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.585019 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.585029 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.585039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.585049 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.585064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.585075 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.585085 | orchestrator |
2025-09-19 17:12:06.585094 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-09-19 17:12:06.585104 | orchestrator | Friday 19 September 2025  17:08:53 +0000 (0:00:03.194)       0:01:24.067 ******
2025-09-19 17:12:06.585114 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.585123 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.585140 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.585156 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:12:06.585174 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:12:06.585190 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:12:06.585206 | orchestrator |
2025-09-19 17:12:06.585223 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-09-19 17:12:06.585238 | orchestrator | Friday 19 September 2025  17:08:57 +0000 (0:00:03.404)       0:01:27.472 ******
2025-09-19 17:12:06.585255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.585281 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.585305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.585323 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.585341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.585360 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.585389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.585409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.585436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.585455 | orchestrator |
2025-09-19 17:12:06.585473 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-09-19 17:12:06.585489 | orchestrator | Friday 19 September 2025  17:09:02 +0000 (0:00:04.707)       0:01:32.180 ******
2025-09-19 17:12:06.585505 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.585520 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.585530 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.585540 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.585549 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.585559 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.585568 | orchestrator |
2025-09-19 17:12:06.585578 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-09-19 17:12:06.585588 | orchestrator | Friday 19 September 2025  17:09:04 +0000 (0:00:02.523)       0:01:34.703 ******
2025-09-19 17:12:06.585597 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.585607 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.585616 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.585626 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.585635 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.585644 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.585654 | orchestrator |
2025-09-19 17:12:06.585663 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-09-19 17:12:06.585673 | orchestrator | Friday 19 September 2025  17:09:06 +0000 (0:00:02.234)       0:01:36.937 ******
2025-09-19 17:12:06.585682 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.585692 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.585701 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.585711 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.585720 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.585729 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.585739 | orchestrator |
2025-09-19 17:12:06.585748 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-09-19 17:12:06.585758 | orchestrator | Friday 19 September 2025  17:09:08 +0000 (0:00:01.733)       0:01:38.670 ******
2025-09-19 17:12:06.585767 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.585777 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.585786 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.585795 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.585805 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.585814 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.585824 | orchestrator |
2025-09-19 17:12:06.585833 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-09-19 17:12:06.585843 | orchestrator | Friday 19 September 2025  17:09:10 +0000 (0:00:01.799)       0:01:40.470 ******
2025-09-19 17:12:06.585878 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.585888 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.585898 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.585907 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.585980 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.586000 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.586010 | orchestrator |
2025-09-19 17:12:06.586051 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-09-19 17:12:06.586062 | orchestrator | Friday 19 September 2025  17:09:12 +0000 (0:00:02.139)       0:01:42.610 ******
2025-09-19 17:12:06.586071 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.586081 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.586090 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.586099 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.586109 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.586118 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.586127 | orchestrator |
2025-09-19 17:12:06.586137 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-09-19 17:12:06.586146 | orchestrator | Friday 19 September 2025  17:09:14 +0000 (0:00:02.460)       0:01:45.070 ******
2025-09-19 17:12:06.586156 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 17:12:06.586165 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.586175 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 17:12:06.586184 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.586193 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 17:12:06.586203 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.586212 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 17:12:06.586222 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.586231 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 17:12:06.586240 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.586250 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-19 17:12:06.586259 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.586269 | orchestrator |
2025-09-19 17:12:06.586278 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-09-19 17:12:06.586288 | orchestrator | Friday 19 September 2025  17:09:17 +0000 (0:00:02.061)       0:01:47.132 ******
2025-09-19 17:12:06.586303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.586314 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.586324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.586340 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.586357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.586368 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.586378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.586388 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.586398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.586407 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.586422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.586432 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.586442 | orchestrator |
2025-09-19 17:12:06.586451 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-09-19 17:12:06.586461 | orchestrator | Friday 19 September 2025  17:09:18 +0000 (0:00:01.738)       0:01:48.870 ******
2025-09-19 17:12:06.586471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.586486 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.586502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.586511 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.586519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-19 17:12:06.586527 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.586535 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.586543 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.586555 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.586572 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.586580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-19 17:12:06.586588 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.586596 | orchestrator |
2025-09-19 17:12:06.586604 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-09-19 17:12:06.586612 | orchestrator | Friday 19 September 2025  17:09:20 +0000 (0:00:01.960)       0:01:50.831 ******
2025-09-19 17:12:06.586620 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:12:06.586631 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:12:06.586639 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:12:06.586647 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:12:06.586655 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:12:06.586663 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:12:06.586670 | orchestrator |
2025-09-19 17:12:06.586678 | orchestrator | TASK [neutron : Copying over
neutron_ovn_metadata_agent.ini] ******************* 2025-09-19 17:12:06.586686 | orchestrator | Friday 19 September 2025 17:09:23 +0000 (0:00:02.365) 0:01:53.196 ****** 2025-09-19 17:12:06.586694 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.586702 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.586710 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.586718 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:12:06.586726 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:12:06.586733 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:12:06.586741 | orchestrator | 2025-09-19 17:12:06.586749 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-19 17:12:06.586757 | orchestrator | Friday 19 September 2025 17:09:26 +0000 (0:00:03.190) 0:01:56.386 ****** 2025-09-19 17:12:06.586765 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.586773 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.586780 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.586788 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.586796 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.586804 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:12:06.586811 | orchestrator | 2025-09-19 17:12:06.586819 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-19 17:12:06.586827 | orchestrator | Friday 19 September 2025 17:09:28 +0000 (0:00:02.487) 0:01:58.874 ****** 2025-09-19 17:12:06.586835 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.586843 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.586851 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.586858 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.586866 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.586874 | orchestrator | skipping: 
[testbed-node-5] 2025-09-19 17:12:06.586882 | orchestrator | 2025-09-19 17:12:06.586890 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-19 17:12:06.586902 | orchestrator | Friday 19 September 2025 17:09:30 +0000 (0:00:02.184) 0:02:01.059 ****** 2025-09-19 17:12:06.586909 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.586917 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.586939 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.586947 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.586955 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.586963 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:12:06.586970 | orchestrator | 2025-09-19 17:12:06.586978 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-19 17:12:06.586986 | orchestrator | Friday 19 September 2025 17:09:34 +0000 (0:00:03.310) 0:02:04.369 ****** 2025-09-19 17:12:06.586994 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.587002 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.587010 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.587018 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.587025 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:12:06.587033 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.587041 | orchestrator | 2025-09-19 17:12:06.587049 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-19 17:12:06.587057 | orchestrator | Friday 19 September 2025 17:09:36 +0000 (0:00:02.492) 0:02:06.861 ****** 2025-09-19 17:12:06.587065 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.587072 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.587084 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.587092 | orchestrator | skipping: 
[testbed-node-4] 2025-09-19 17:12:06.587100 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.587108 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:12:06.587115 | orchestrator | 2025-09-19 17:12:06.587123 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-19 17:12:06.587131 | orchestrator | Friday 19 September 2025 17:09:38 +0000 (0:00:02.102) 0:02:08.964 ****** 2025-09-19 17:12:06.587139 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.587147 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.587154 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.587162 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.587170 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.587177 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:12:06.587185 | orchestrator | 2025-09-19 17:12:06.587193 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-19 17:12:06.587201 | orchestrator | Friday 19 September 2025 17:09:41 +0000 (0:00:03.019) 0:02:11.984 ****** 2025-09-19 17:12:06.587208 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.587216 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.587224 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.587232 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.587239 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.587247 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:12:06.587255 | orchestrator | 2025-09-19 17:12:06.587263 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-19 17:12:06.587271 | orchestrator | Friday 19 September 2025 17:09:45 +0000 (0:00:03.255) 0:02:15.239 ****** 2025-09-19 17:12:06.587278 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 17:12:06.587286 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.587294 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 17:12:06.587302 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.587310 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 17:12:06.587318 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.587325 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 17:12:06.587338 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:12:06.587350 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 17:12:06.587358 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.587366 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-19 17:12:06.587374 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.587381 | orchestrator | 2025-09-19 17:12:06.587389 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-09-19 17:12:06.587397 | orchestrator | Friday 19 September 2025 17:09:47 +0000 (0:00:02.173) 0:02:17.412 ****** 2025-09-19 17:12:06.587406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:12:06.587414 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.587422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:12:06.587430 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.587442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-19 17:12:06.587451 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.587459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:12:06.587471 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.587485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:12:06.587493 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.587501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-19 17:12:06.587509 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:12:06.587517 | orchestrator | 2025-09-19 17:12:06.587525 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-19 17:12:06.587533 | orchestrator | Friday 19 September 2025 17:09:49 +0000 (0:00:01.952) 0:02:19.365 ****** 2025-09-19 17:12:06.587544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 17:12:06.587554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 17:12:06.587571 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 17:12:06.587580 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-19 17:12:06.587588 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 17:12:06.587600 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-19 17:12:06.587608 | orchestrator | 2025-09-19 17:12:06.587616 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-19 17:12:06.587624 | orchestrator | Friday 19 September 2025 17:09:52 +0000 (0:00:03.143) 0:02:22.508 ****** 2025-09-19 17:12:06.587632 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:12:06.587640 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:12:06.587647 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:12:06.587655 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:12:06.587663 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:12:06.587670 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:12:06.587678 | orchestrator | 2025-09-19 17:12:06.587686 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-09-19 17:12:06.587698 | orchestrator | Friday 19 September 2025 17:09:52 +0000 (0:00:00.587) 0:02:23.095 ****** 2025-09-19 17:12:06.587706 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:12:06.587714 | orchestrator | 2025-09-19 17:12:06.587722 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-19 17:12:06.587730 | orchestrator | Friday 19 September 2025 17:09:55 +0000 (0:00:02.418) 0:02:25.514 ****** 2025-09-19 17:12:06.587737 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:12:06.587745 | orchestrator | 2025-09-19 17:12:06.587753 | orchestrator | TASK [neutron : Running 
Neutron bootstrap container] *************************** 2025-09-19 17:12:06.587761 | orchestrator | Friday 19 September 2025 17:09:57 +0000 (0:00:02.195) 0:02:27.709 ****** 2025-09-19 17:12:06.587768 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:12:06.587776 | orchestrator | 2025-09-19 17:12:06.587784 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 17:12:06.587792 | orchestrator | Friday 19 September 2025 17:10:38 +0000 (0:00:40.751) 0:03:08.461 ****** 2025-09-19 17:12:06.587800 | orchestrator | 2025-09-19 17:12:06.587807 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 17:12:06.587815 | orchestrator | Friday 19 September 2025 17:10:38 +0000 (0:00:00.063) 0:03:08.525 ****** 2025-09-19 17:12:06.587823 | orchestrator | 2025-09-19 17:12:06.587831 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 17:12:06.587839 | orchestrator | Friday 19 September 2025 17:10:38 +0000 (0:00:00.237) 0:03:08.762 ****** 2025-09-19 17:12:06.587846 | orchestrator | 2025-09-19 17:12:06.587854 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 17:12:06.587862 | orchestrator | Friday 19 September 2025 17:10:38 +0000 (0:00:00.069) 0:03:08.831 ****** 2025-09-19 17:12:06.587870 | orchestrator | 2025-09-19 17:12:06.587882 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 17:12:06.587890 | orchestrator | Friday 19 September 2025 17:10:38 +0000 (0:00:00.067) 0:03:08.899 ****** 2025-09-19 17:12:06.587898 | orchestrator | 2025-09-19 17:12:06.587906 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-19 17:12:06.587913 | orchestrator | Friday 19 September 2025 17:10:38 +0000 (0:00:00.060) 0:03:08.959 ****** 2025-09-19 17:12:06.587935 | 
orchestrator | 2025-09-19 17:12:06.587944 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-19 17:12:06.587952 | orchestrator | Friday 19 September 2025 17:10:38 +0000 (0:00:00.087) 0:03:09.047 ****** 2025-09-19 17:12:06.587960 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:12:06.587968 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:12:06.587976 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:12:06.587983 | orchestrator | 2025-09-19 17:12:06.587991 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-19 17:12:06.587999 | orchestrator | Friday 19 September 2025 17:11:09 +0000 (0:00:30.169) 0:03:39.216 ****** 2025-09-19 17:12:06.588007 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:12:06.588015 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:12:06.588023 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:12:06.588031 | orchestrator | 2025-09-19 17:12:06.588038 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:12:06.588046 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 17:12:06.588055 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-19 17:12:06.588063 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-19 17:12:06.588071 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 17:12:06.588084 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 17:12:06.588092 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-19 17:12:06.588100 | orchestrator | 2025-09-19 
17:12:06.588108 | orchestrator | 2025-09-19 17:12:06.588116 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:12:06.588124 | orchestrator | Friday 19 September 2025 17:12:05 +0000 (0:00:56.753) 0:04:35.970 ****** 2025-09-19 17:12:06.588132 | orchestrator | =============================================================================== 2025-09-19 17:12:06.588140 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 56.75s 2025-09-19 17:12:06.588147 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.75s 2025-09-19 17:12:06.588155 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.17s 2025-09-19 17:12:06.588167 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.80s 2025-09-19 17:12:06.588175 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.08s 2025-09-19 17:12:06.588183 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.12s 2025-09-19 17:12:06.588191 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.78s 2025-09-19 17:12:06.588198 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.71s 2025-09-19 17:12:06.588206 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.42s 2025-09-19 17:12:06.588214 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.27s 2025-09-19 17:12:06.588222 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.95s 2025-09-19 17:12:06.588230 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.55s 2025-09-19 17:12:06.588238 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS 
certificate --- 3.47s
2025-09-19 17:12:06.588246 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.44s
2025-09-19 17:12:06.588254 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.40s
2025-09-19 17:12:06.588261 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 3.38s
2025-09-19 17:12:06.588269 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 3.31s
2025-09-19 17:12:06.588277 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 3.26s
2025-09-19 17:12:06.588285 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.19s
2025-09-19 17:12:06.588293 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 3.19s
2025-09-19 17:12:06.588301 | orchestrator | 2025-09-19 17:12:06 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:12:09.624545 | orchestrator | 2025-09-19 17:12:09 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:12:09.625209 | orchestrator | 2025-09-19 17:12:09 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:12:09.627012 | orchestrator | 2025-09-19 17:12:09 | INFO  | Task 68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state STARTED
2025-09-19 17:12:09.627871 | orchestrator | 2025-09-19 17:12:09 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED
2025-09-19 17:12:09.627896 | orchestrator | 2025-09-19 17:12:09 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:13:25.684870 | orchestrator | 2025-09-19 17:13:25 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:13:25.687155 | orchestrator | 2025-09-19 17:13:25 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:13:25.689463 | orchestrator | 2025-09-19 17:13:25 | INFO  | Task 
68f2ccc9-e7de-4352-9f2b-549b0365f4bd is in state SUCCESS
2025-09-19 17:13:25.691550 | orchestrator |
2025-09-19 17:13:25.691586 | orchestrator |
2025-09-19 17:13:25.691599 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 17:13:25.691611 | orchestrator |
2025-09-19 17:13:25.691638 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 17:13:25.691651 | orchestrator | Friday 19 September 2025 17:10:28 +0000 (0:00:00.257) 0:00:00.257 ******
2025-09-19 17:13:25.691663 | orchestrator | ok: [testbed-manager]
2025-09-19 17:13:25.691675 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:13:25.692358 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:13:25.692379 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:13:25.692390 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:13:25.692401 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:13:25.692411 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:13:25.693717 | orchestrator |
2025-09-19 17:13:25.693734 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 17:13:25.693746 | orchestrator | Friday 19 September 2025 17:10:29 +0000 (0:00:00.732) 0:00:00.989 ******
2025-09-19 17:13:25.693758 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-09-19 17:13:25.693770 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-09-19 17:13:25.693781 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-09-19 17:13:25.693792 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-09-19 17:13:25.693803 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-09-19 17:13:25.693814 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-09-19 17:13:25.693825 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-09-19 17:13:25.693836 | orchestrator |
2025-09-19 17:13:25.693847 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-09-19 17:13:25.693858 | orchestrator |
2025-09-19 17:13:25.693869 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-09-19 17:13:25.693879 | orchestrator | Friday 19 September 2025 17:10:29 +0000 (0:00:00.613) 0:00:01.602 ******
2025-09-19 17:13:25.693892 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:13:25.693904 | orchestrator |
2025-09-19 17:13:25.693916 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-09-19 17:13:25.693927 | orchestrator | Friday 19 September 2025 17:10:31 +0000 (0:00:01.294) 0:00:02.897 ******
2025-09-19 17:13:25.693964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.694001 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 17:13:25.694050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.694065 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.694104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.694117 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.694130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.694142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.694161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.694172 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.694184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.694196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.694220 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.694233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.694245 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.694258 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.694277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.694288 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.694301 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 17:13:25.694328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.694342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.694355 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.694374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.694388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.694400 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.694413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.694425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.694448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.694460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.694472 | orchestrator |
2025-09-19 17:13:25.694484 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-09-19 17:13:25.694495 | orchestrator | Friday 19 September 2025 17:10:34 +0000 (0:00:03.265) 0:00:06.163 ******
2025-09-19 17:13:25.694516 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:13:25.694528 | orchestrator |
2025-09-19 17:13:25.694539 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] *****
2025-09-19 17:13:25.694550 | orchestrator | Friday 19 September 2025 17:10:35 +0000 (0:00:01.290) 0:00:07.453 ******
2025-09-19 17:13:25.694561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.694573 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 17:13:25.694585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.694596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.694622 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.694635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 
17:13:25.694646 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 17:13:25.694664 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 17:13:25.694676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.694687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.694698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.694710 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.694732 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.694744 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.694763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.694774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.694785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.694796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.694808 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 17:13:25.694826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.694843 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.694861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.694873 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.694884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.694895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.694907 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.694918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.695019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.695040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.695051 | orchestrator | 2025-09-19 17:13:25.695062 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-19 17:13:25.695074 | orchestrator | Friday 19 September 2025 17:10:42 +0000 (0:00:06.316) 0:00:13.770 ****** 2025-09-19 17:13:25.695086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.695098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695150 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-19 17:13:25.695173 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.695185 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695196 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 17:13:25.695209 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.695231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695294 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:13:25.695306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.695317 | orchestrator | skipping: [testbed-manager] 2025-09-19 17:13:25.695328 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:13:25.695339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695391 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:13:25.695414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.695426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695448 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:13:25.695460 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.695471 | orchestrator | 
skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695482 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695493 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:13:25.695504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.695521 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695561 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:13:25.695572 | orchestrator | 2025-09-19 17:13:25.695583 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-19 17:13:25.695594 | orchestrator | Friday 19 September 2025 17:10:44 +0000 (0:00:02.502) 0:00:16.273 ****** 2025-09-19 17:13:25.695606 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': 
True}}}})  2025-09-19 17:13:25.695617 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.695629 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695640 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-19 17:13:25.695658 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.695693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695738 | orchestrator | skipping: [testbed-manager] 2025-09-19 17:13:25.695749 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:13:25.695760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.695778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695835 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:13:25.695846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.695857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
 2025-09-19 17:13:25.695886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-19 17:13:25.695908 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:13:25.695925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.695960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.695972 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.695994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.696011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.696023 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:13:25.696034 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:13:25.696045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-19 17:13:25.696056 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.696078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-19 17:13:25.696090 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:13:25.696101 | orchestrator | 2025-09-19 17:13:25.696112 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-19 17:13:25.696123 | orchestrator | Friday 19 September 2025 17:10:46 +0000 (0:00:02.320) 0:00:18.593 ****** 2025-09-19 17:13:25.696291 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-19 17:13:25.696309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 17:13:25.696321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 17:13:25.696341 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 17:13:25.696353 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 17:13:25.696364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 17:13:25.696375 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 17:13:25.696392 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-19 17:13:25.696404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.696445 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.696466 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.696477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.696489 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.696500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.696517 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.696559 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 
'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-19 17:13:25.696573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.696592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.696603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.696614 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.696625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.696637 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.696653 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.696665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.696706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.696730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-19 17:13:25.696741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 
'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.696753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.696764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-19 17:13:25.696775 | orchestrator | 2025-09-19 17:13:25.696786 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-09-19 17:13:25.696797 | orchestrator | Friday 19 September 2025 17:10:53 +0000 (0:00:06.663) 0:00:25.256 ****** 2025-09-19 17:13:25.696808 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 17:13:25.696819 | orchestrator | 2025-09-19 17:13:25.696830 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules 
files] *********** 2025-09-19 17:13:25.696840 | orchestrator | Friday 19 September 2025 17:10:54 +0000 (0:00:00.908) 0:00:26.165 ****** 2025-09-19 17:13:25.696856 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1104950, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.921237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.696895 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1104950, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.921237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.696915 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1104966, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.926848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-09-19 17:13:25.696927 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1104966, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.926848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.696959 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1104950, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.921237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.696970 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1104942, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9206154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.696982 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1104950, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.921237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.696998 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1104966, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.926848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697047 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1104942, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9206154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697061 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 
'inode': 1104950, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.921237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697072 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1104966, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.926848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697083 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1104942, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9206154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697094 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1104950, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.921237, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.697105 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1104950, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.921237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697121 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1104960, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9234812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697173 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1104960, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9234812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2025-09-19 17:13:25.697187 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1104942, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9206154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697198 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1104960, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9234812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697209 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1104939, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9191153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697220 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1104960, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9234812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697231 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1104966, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.926848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697247 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1104939, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9191153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697266 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 
'inode': 1104966, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.926848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697307 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1104939, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9191153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697321 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1104939, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9191153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697332 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1104942, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9206154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697343 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1104952, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9215941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697354 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1104952, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9215941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697370 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1104942, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9206154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-09-19 17:13:25.697388 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1104952, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9215941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697427 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1104952, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9215941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697440 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1104959, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9234078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697452 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1104959, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9234078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697463 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1104959, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9234078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697474 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1104960, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9234812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697503 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1104954, 'dev': 126, 
'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9222465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697515 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1104959, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9234078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697556 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1104960, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9234812, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697569 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1104954, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9222465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697580 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1104966, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.926848, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.697591 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1104939, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9191153, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697602 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1104949, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.920721, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697626 
| orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1104954, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9222465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697637 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1104952, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9215941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697678 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1104954, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9222465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.697691 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
2025-09-19 17:13:25.697703 | orchestrator | skipping: [testbed-node-0] … [testbed-node-5] => (every item under /operations/prometheus/ — regular files, mode 0644, owner root:root)
2025-09-19 17:13:25.698668 | orchestrator | changed: [testbed-manager] => (items shown: ceph.rules, openstack.rules, cadvisor.rules)
2025-09-19 17:13:25.698774 | orchestrator | item sizes (bytes): alertmanager.rules 5051, alertmanager.rec.rules 3, cadvisor.rules 3900, ceph.rules 55956, ceph.rec.rules 3, elasticsearch.rules 5987, haproxy.rules 7933, hardware.rules 5593, mysql.rules 3792, node.rules 13522, node.rec.rules 2309, openstack.rules 12293, prometheus-extra.rules 7408, prometheus.rec.rules 3, rabbitmq.rules 3539, redfish.rules 334
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.698785 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:13:25.698796 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1104955, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9225712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.698807 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1104975, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9294813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.698818 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:13:25.698834 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1104975, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9294813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-09-19 17:13:25.698845 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:13:25.698856 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1104952, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9215941, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.698873 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1104959, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9234078, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.698891 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1104954, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9222465, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.698902 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1104949, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.920721, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.698913 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1104965, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9265795, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.698924 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1104934, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9181168, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.698999 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1104976, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9294813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.699012 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1104963, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9254813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.699030 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1104941, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.919611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.699050 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1104936, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 
'mtime': 1758240129.0, 'ctime': 1758299156.918481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.699061 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1104957, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.923011, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.699072 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1104955, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9225712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.699083 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1104975, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9294813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-19 17:13:25.699095 | orchestrator | 2025-09-19 17:13:25.699106 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-09-19 17:13:25.699117 | orchestrator | Friday 19 September 2025 17:11:20 +0000 (0:00:25.862) 0:00:52.027 ****** 2025-09-19 17:13:25.699128 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 17:13:25.699139 | orchestrator | 2025-09-19 17:13:25.699149 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-09-19 17:13:25.699160 | orchestrator | Friday 19 September 2025 17:11:21 +0000 (0:00:00.917) 0:00:52.945 ****** 2025-09-19 17:13:25.699171 | orchestrator | [WARNING]: Skipped 2025-09-19 17:13:25.699187 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 17:13:25.699199 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-09-19 17:13:25.699209 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 17:13:25.699220 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-09-19 17:13:25.699231 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 17:13:25.699242 | orchestrator | [WARNING]: Skipped 2025-09-19 17:13:25.699252 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 17:13:25.699263 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-09-19 17:13:25.699285 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 17:13:25.699296 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-09-19 17:13:25.699306 | orchestrator | [WARNING]: Skipped 2025-09-19 17:13:25.699317 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 
17:13:25.699327 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-09-19 17:13:25.699336 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 17:13:25.699346 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-09-19 17:13:25.699355 | orchestrator | [WARNING]: Skipped 2025-09-19 17:13:25.699370 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 17:13:25.699379 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-09-19 17:13:25.699389 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 17:13:25.699399 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-09-19 17:13:25.699408 | orchestrator | [WARNING]: Skipped 2025-09-19 17:13:25.699418 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 17:13:25.699427 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-09-19 17:13:25.699437 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 17:13:25.699446 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-09-19 17:13:25.699456 | orchestrator | [WARNING]: Skipped 2025-09-19 17:13:25.699465 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 17:13:25.699475 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-09-19 17:13:25.699485 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 17:13:25.699494 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-09-19 17:13:25.699504 | orchestrator | [WARNING]: Skipped 2025-09-19 17:13:25.699513 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 17:13:25.699523 | orchestrator | 
node-5/prometheus.yml.d' path due to this access issue: 2025-09-19 17:13:25.699532 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-19 17:13:25.699542 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-09-19 17:13:25.699551 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 17:13:25.699561 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-19 17:13:25.699570 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 17:13:25.699580 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-19 17:13:25.699590 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 17:13:25.699599 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 17:13:25.699609 | orchestrator | 2025-09-19 17:13:25.699618 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-09-19 17:13:25.699628 | orchestrator | Friday 19 September 2025 17:11:23 +0000 (0:00:01.812) 0:00:54.757 ****** 2025-09-19 17:13:25.699638 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-19 17:13:25.699648 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:13:25.699658 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-19 17:13:25.699668 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:13:25.699677 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-19 17:13:25.699687 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:13:25.699697 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-19 17:13:25.699706 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:13:25.699716 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-19 
17:13:25.699731 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:13:25.699741 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-19 17:13:25.699750 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:13:25.699760 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-09-19 17:13:25.699769 | orchestrator | 2025-09-19 17:13:25.699779 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-19 17:13:25.699789 | orchestrator | Friday 19 September 2025 17:11:38 +0000 (0:00:15.453) 0:01:10.210 ****** 2025-09-19 17:13:25.699798 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 17:13:25.699808 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:13:25.699818 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 17:13:25.699832 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:13:25.699842 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 17:13:25.699851 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:13:25.699861 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 17:13:25.699871 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:13:25.699880 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 17:13:25.699890 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:13:25.699900 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-19 17:13:25.699909 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:13:25.699919 | orchestrator | 
changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-09-19 17:13:25.699928 | orchestrator | 2025-09-19 17:13:25.699956 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-09-19 17:13:25.699966 | orchestrator | Friday 19 September 2025 17:11:41 +0000 (0:00:03.188) 0:01:13.399 ****** 2025-09-19 17:13:25.699976 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 17:13:25.699991 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 17:13:25.700001 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 17:13:25.700010 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:13:25.700020 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:13:25.700030 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:13:25.700039 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 17:13:25.700049 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:13:25.700058 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 17:13:25.700068 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:13:25.700078 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-19 17:13:25.700088 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:13:25.700097 | orchestrator | changed: [testbed-manager] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-19 17:13:25.700107 | orchestrator | 2025-09-19 17:13:25.700117 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-19 17:13:25.700133 | orchestrator | Friday 19 September 2025 17:11:43 +0000 (0:00:02.105) 0:01:15.505 ****** 2025-09-19 17:13:25.700143 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 17:13:25.700152 | orchestrator | 2025-09-19 17:13:25.700162 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-19 17:13:25.700172 | orchestrator | Friday 19 September 2025 17:11:44 +0000 (0:00:00.744) 0:01:16.249 ****** 2025-09-19 17:13:25.700181 | orchestrator | skipping: [testbed-manager] 2025-09-19 17:13:25.700191 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:13:25.700200 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:13:25.700210 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:13:25.700219 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:13:25.700229 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:13:25.700238 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:13:25.700248 | orchestrator | 2025-09-19 17:13:25.700258 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-19 17:13:25.700267 | orchestrator | Friday 19 September 2025 17:11:45 +0000 (0:00:00.651) 0:01:16.900 ****** 2025-09-19 17:13:25.700277 | orchestrator | skipping: [testbed-manager] 2025-09-19 17:13:25.700286 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:13:25.700296 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:13:25.700305 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:13:25.700315 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:13:25.700325 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:13:25.700334 | 
orchestrator | changed: [testbed-node-2] 2025-09-19 17:13:25.700344 | orchestrator | 2025-09-19 17:13:25.700353 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-19 17:13:25.700363 | orchestrator | Friday 19 September 2025 17:11:47 +0000 (0:00:02.617) 0:01:19.518 ****** 2025-09-19 17:13:25.700373 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 17:13:25.700382 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:13:25.700392 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 17:13:25.700401 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 17:13:25.700411 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:13:25.700421 | orchestrator | skipping: [testbed-manager] 2025-09-19 17:13:25.700430 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 17:13:25.700440 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:13:25.700449 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 17:13:25.700459 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:13:25.700468 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 17:13:25.700482 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:13:25.700492 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-19 17:13:25.700502 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:13:25.700511 | orchestrator | 2025-09-19 17:13:25.700521 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-19 17:13:25.700531 | orchestrator | Friday 19 September 2025 17:11:50 
+0000 (0:00:02.438) 0:01:21.957 ****** 2025-09-19 17:13:25.700541 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 17:13:25.700550 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:13:25.700560 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 17:13:25.700570 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:13:25.700579 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 17:13:25.700589 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:13:25.700604 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 17:13:25.700614 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:13:25.700627 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 17:13:25.700637 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:13:25.700647 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-19 17:13:25.700657 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:13:25.700666 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-19 17:13:25.700676 | orchestrator | 2025-09-19 17:13:25.700686 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-19 17:13:25.700696 | orchestrator | Friday 19 September 2025 17:11:51 +0000 (0:00:01.271) 0:01:23.229 ****** 2025-09-19 17:13:25.700705 | orchestrator | [WARNING]: Skipped 2025-09-19 17:13:25.700715 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-19 17:13:25.700724 | orchestrator | due to this access issue: 2025-09-19 17:13:25.700734 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-19 17:13:25.700744 | orchestrator | not a directory 2025-09-19 17:13:25.700754 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-19 17:13:25.700763 | orchestrator | 2025-09-19 17:13:25.700773 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-19 17:13:25.700782 | orchestrator | Friday 19 September 2025 17:11:52 +0000 (0:00:00.982) 0:01:24.211 ****** 2025-09-19 17:13:25.700792 | orchestrator | skipping: [testbed-manager] 2025-09-19 17:13:25.700802 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:13:25.700811 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:13:25.700821 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:13:25.700830 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:13:25.700840 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:13:25.700849 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:13:25.700859 | orchestrator | 2025-09-19 17:13:25.700869 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-19 17:13:25.700879 | orchestrator | Friday 19 September 2025 17:11:53 +0000 (0:00:00.901) 0:01:25.112 ****** 2025-09-19 17:13:25.700888 | orchestrator | skipping: [testbed-manager] 2025-09-19 17:13:25.700898 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:13:25.700907 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:13:25.700917 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:13:25.700926 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:13:25.700951 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:13:25.700961 | orchestrator | skipping: [testbed-node-5] 2025-09-19 
17:13:25.700971 | orchestrator |
2025-09-19 17:13:25.700981 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2025-09-19 17:13:25.700990 | orchestrator | Friday 19 September 2025 17:11:54 +0000 (0:00:00.803) 0:01:25.916 ******
2025-09-19 17:13:25.701001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.701012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.701036 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-19 17:13:25.701052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.701063 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.701073 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.701083 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.701093 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-19 17:13:25.701104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.701125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.701135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.701150 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.701161 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.701171 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.701181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.701191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.701201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.701221 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.701232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.701246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.701256 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-19 17:13:25.701268 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.701277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.701293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.701307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-19 17:13:25.701318 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.701332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.701343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.701352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-19 17:13:25.701362 | orchestrator |
2025-09-19 17:13:25.701372 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-09-19 17:13:25.701382 | orchestrator | Friday 19 September 2025 17:11:58 +0000 (0:00:04.422) 0:01:30.339 ******
2025-09-19 17:13:25.701392 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-19 17:13:25.701402 | orchestrator | skipping: [testbed-manager]
2025-09-19 17:13:25.701411 | orchestrator |
2025-09-19 17:13:25.701421 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 17:13:25.701431 | orchestrator | Friday 19 September 2025 17:11:59 +0000 (0:00:01.025) 0:01:31.364 ******
2025-09-19 17:13:25.701441 | orchestrator |
2025-09-19 17:13:25.701459 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 17:13:25.701469 | orchestrator | Friday 19 September 2025 17:11:59 +0000 (0:00:00.064) 0:01:31.429 ******
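The "Check prometheus containers" task above iterates a dict of container definitions, where each value carries `container_name`, `image`, `volumes`, and optionally `pid_mode`. As a minimal illustration of how such a definition maps onto container runtime flags, here is a hypothetical Python sketch; this is not kolla-ansible's actual implementation (which uses its own container modules), and `to_docker_args` is an invented helper.

```python
# Hedged sketch: translate one kolla-style container definition (as printed
# in the task output above) into a `docker run` argument list. Illustrative
# only; kolla-ansible does NOT shell out to the docker CLI like this.

def to_docker_args(spec):
    """Build a `docker run` argument list from a container definition dict."""
    args = ["docker", "run", "--detach", "--name", spec["container_name"]]
    if spec.get("pid_mode"):                 # e.g. 'host' for node-exporter
        args += ["--pid", spec["pid_mode"]]
    for volume in spec.get("volumes", []):   # 'src:dst[:mode]' strings
        args += ["--volume", volume]
    args.append(spec["image"])
    return args

# Abbreviated copy of the prometheus-node-exporter definition from the log.
node_exporter = {
    "container_name": "prometheus_node_exporter",
    "enabled": True,
    "image": "registry.osism.tech/kolla/prometheus-node-exporter:2024.2",
    "pid_mode": "host",
    "volumes": ["/etc/localtime:/etc/localtime:ro", "/:/host:ro,rslave"],
}

args = to_docker_args(node_exporter)
```

The `pid_mode: host` setting is what lets the node exporter observe host processes from inside a container; the `/:/host:ro,rslave` bind mount serves the same purpose for the host filesystem.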
2025-09-19 17:13:25.701478 | orchestrator |
2025-09-19 17:13:25.701488 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 17:13:25.701498 | orchestrator | Friday 19 September 2025 17:11:59 +0000 (0:00:00.060) 0:01:31.490 ******
2025-09-19 17:13:25.701508 | orchestrator |
2025-09-19 17:13:25.701517 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 17:13:25.701527 | orchestrator | Friday 19 September 2025 17:11:59 +0000 (0:00:00.061) 0:01:31.552 ******
2025-09-19 17:13:25.701536 | orchestrator |
2025-09-19 17:13:25.701546 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 17:13:25.701556 | orchestrator | Friday 19 September 2025 17:12:00 +0000 (0:00:00.179) 0:01:31.731 ******
2025-09-19 17:13:25.701565 | orchestrator |
2025-09-19 17:13:25.701575 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 17:13:25.701584 | orchestrator | Friday 19 September 2025 17:12:00 +0000 (0:00:00.059) 0:01:31.791 ******
2025-09-19 17:13:25.701594 | orchestrator |
2025-09-19 17:13:25.701604 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-09-19 17:13:25.701613 | orchestrator | Friday 19 September 2025 17:12:00 +0000 (0:00:00.059) 0:01:31.850 ******
2025-09-19 17:13:25.701623 | orchestrator |
2025-09-19 17:13:25.701632 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-09-19 17:13:25.701642 | orchestrator | Friday 19 September 2025 17:12:00 +0000 (0:00:00.082) 0:01:31.933 ******
2025-09-19 17:13:25.701651 | orchestrator | changed: [testbed-manager]
2025-09-19 17:13:25.701661 | orchestrator |
2025-09-19 17:13:25.701670 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-09-19 17:13:25.701680 | orchestrator | Friday 19 September 2025 17:12:19 +0000 (0:00:18.970) 0:01:50.903 ******
2025-09-19 17:13:25.701690 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:13:25.701699 | orchestrator | changed: [testbed-manager]
2025-09-19 17:13:25.701709 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:13:25.701722 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:13:25.701732 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:13:25.701742 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:13:25.701752 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:13:25.701761 | orchestrator |
2025-09-19 17:13:25.701771 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-09-19 17:13:25.701780 | orchestrator | Friday 19 September 2025 17:12:29 +0000 (0:00:10.186) 0:02:01.090 ******
2025-09-19 17:13:25.701790 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:13:25.701800 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:13:25.701809 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:13:25.701819 | orchestrator |
2025-09-19 17:13:25.701828 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-09-19 17:13:25.701838 | orchestrator | Friday 19 September 2025 17:12:35 +0000 (0:00:05.765) 0:02:06.855 ******
2025-09-19 17:13:25.701848 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:13:25.701857 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:13:25.701867 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:13:25.701876 | orchestrator |
2025-09-19 17:13:25.701886 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-09-19 17:13:25.701896 | orchestrator | Friday 19 September 2025 17:12:40 +0000 (0:00:04.864) 0:02:11.720 ******
2025-09-19 17:13:25.701905 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:13:25.701915 | orchestrator | changed: [testbed-manager]
2025-09-19 17:13:25.701925 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:13:25.701950 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:13:25.701960 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:13:25.701974 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:13:25.701984 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:13:25.702001 | orchestrator |
2025-09-19 17:13:25.702011 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-09-19 17:13:25.702047 | orchestrator | Friday 19 September 2025 17:12:56 +0000 (0:00:16.060) 0:02:27.780 ******
2025-09-19 17:13:25.702057 | orchestrator | changed: [testbed-manager]
2025-09-19 17:13:25.702066 | orchestrator |
2025-09-19 17:13:25.702076 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-09-19 17:13:25.702086 | orchestrator | Friday 19 September 2025 17:13:03 +0000 (0:00:07.395) 0:02:35.175 ******
2025-09-19 17:13:25.702096 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:13:25.702105 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:13:25.702115 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:13:25.702124 | orchestrator |
2025-09-19 17:13:25.702134 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-09-19 17:13:25.702144 | orchestrator | Friday 19 September 2025 17:13:09 +0000 (0:00:06.292) 0:02:41.468 ******
2025-09-19 17:13:25.702154 | orchestrator | changed: [testbed-manager]
2025-09-19 17:13:25.702163 | orchestrator |
2025-09-19 17:13:25.702173 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-09-19 17:13:25.702182 | orchestrator | Friday 19 September 2025 17:13:14 +0000 (0:00:04.911) 0:02:46.380 ******
2025-09-19 17:13:25.702192 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:13:25.702201 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:13:25.702211 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:13:25.702220 | orchestrator |
2025-09-19 17:13:25.702230 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 17:13:25.702240 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-19 17:13:25.702250 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 17:13:25.702260 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 17:13:25.702270 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-19 17:13:25.702280 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 17:13:25.702289 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 17:13:25.702299 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-19 17:13:25.702308 | orchestrator |
2025-09-19 17:13:25.702318 | orchestrator |
2025-09-19 17:13:25.702328 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 17:13:25.702337 | orchestrator | Friday 19 September 2025 17:13:25 +0000 (0:00:10.493) 0:02:56.874 ******
2025-09-19 17:13:25.702347 | orchestrator | ===============================================================================
2025-09-19 17:13:25.702357 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.86s
2025-09-19 17:13:25.702366 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.97s
2025-09-19 17:13:25.702376 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.06s
2025-09-19 17:13:25.702385 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 15.45s
2025-09-19 17:13:25.702395 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.49s
2025-09-19 17:13:25.702405 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 10.19s
2025-09-19 17:13:25.702414 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.40s
2025-09-19 17:13:25.702435 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.66s
2025-09-19 17:13:25.702445 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.32s
2025-09-19 17:13:25.702455 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.29s
2025-09-19 17:13:25.702464 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.77s
2025-09-19 17:13:25.702474 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.91s
2025-09-19 17:13:25.702483 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 4.86s
2025-09-19 17:13:25.702493 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.42s
2025-09-19 17:13:25.702502 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.27s
2025-09-19 17:13:25.702512 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.19s
2025-09-19 17:13:25.702521 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.62s
2025-09-19 17:13:25.702531 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.50s
2025-09-19 17:13:25.702540 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.44s
2025-09-19 17:13:25.702550 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.32s 2025-09-19 17:13:25.702564 | orchestrator | 2025-09-19 17:13:25 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:13:25.702574 | orchestrator | 2025-09-19 17:13:25 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:13:28.753103 | orchestrator | 2025-09-19 17:13:28 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED 2025-09-19 17:13:28.754075 | orchestrator | 2025-09-19 17:13:28 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:13:28.756751 | orchestrator | 2025-09-19 17:13:28 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:13:28.758472 | orchestrator | 2025-09-19 17:13:28 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:13:28.758518 | orchestrator | 2025-09-19 17:13:28 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:13:31.798727 | orchestrator | 2025-09-19 17:13:31 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED 2025-09-19 17:13:31.801288 | orchestrator | 2025-09-19 17:13:31 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:13:31.802809 | orchestrator | 2025-09-19 17:13:31 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:13:31.804586 | orchestrator | 2025-09-19 17:13:31 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:13:31.804691 | orchestrator | 2025-09-19 17:13:31 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:13:34.849713 | orchestrator | 2025-09-19 17:13:34 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED 2025-09-19 17:13:34.849826 | orchestrator | 2025-09-19 17:13:34 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:13:34.850686 | orchestrator 
| 2025-09-19 17:13:34 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:13:34.853261 | orchestrator | 2025-09-19 17:13:34 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:13:34.853287 | orchestrator | 2025-09-19 17:13:34 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:13:37.897126 | orchestrator | 2025-09-19 17:13:37 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED 2025-09-19 17:13:37.899493 | orchestrator | 2025-09-19 17:13:37 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:13:37.902310 | orchestrator | 2025-09-19 17:13:37 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:13:37.905612 | orchestrator | 2025-09-19 17:13:37 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:13:37.905635 | orchestrator | 2025-09-19 17:13:37 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:13:40.947713 | orchestrator | 2025-09-19 17:13:40 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED 2025-09-19 17:13:40.949244 | orchestrator | 2025-09-19 17:13:40 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:13:40.949277 | orchestrator | 2025-09-19 17:13:40 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:13:40.949289 | orchestrator | 2025-09-19 17:13:40 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:13:40.949322 | orchestrator | 2025-09-19 17:13:40 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:13:43.995240 | orchestrator | 2025-09-19 17:13:43 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED 2025-09-19 17:13:43.997258 | orchestrator | 2025-09-19 17:13:43 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:13:43.999603 | orchestrator | 2025-09-19 17:13:43 | INFO  | 
Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:13:44.001307 | orchestrator | 2025-09-19 17:13:43 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:13:44.001359 | orchestrator | 2025-09-19 17:13:43 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:13:47.065861 | orchestrator | 2025-09-19 17:13:47 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED 2025-09-19 17:13:47.067174 | orchestrator | 2025-09-19 17:13:47 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:13:47.069333 | orchestrator | 2025-09-19 17:13:47 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:13:47.071201 | orchestrator | 2025-09-19 17:13:47 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:13:47.071585 | orchestrator | 2025-09-19 17:13:47 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:13:50.104291 | orchestrator | 2025-09-19 17:13:50 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED 2025-09-19 17:13:50.104420 | orchestrator | 2025-09-19 17:13:50 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:13:50.104844 | orchestrator | 2025-09-19 17:13:50 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:13:50.107425 | orchestrator | 2025-09-19 17:13:50 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:13:50.107482 | orchestrator | 2025-09-19 17:13:50 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:13:53.134229 | orchestrator | 2025-09-19 17:13:53 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED 2025-09-19 17:13:53.134523 | orchestrator | 2025-09-19 17:13:53 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:13:53.135147 | orchestrator | 2025-09-19 17:13:53 | INFO  | Task 
6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED
2025-09-19 17:13:53.136545 | orchestrator | 2025-09-19 17:13:53 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED
2025-09-19 17:13:53.136635 | orchestrator | 2025-09-19 17:13:53 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:13:56.173060 | orchestrator | 2025-09-19 17:13:56 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:13:56.174505 | orchestrator | 2025-09-19 17:13:56 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:13:56.176199 | orchestrator | 2025-09-19 17:13:56 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED
2025-09-19 17:13:56.177780 | orchestrator | 2025-09-19 17:13:56 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED
2025-09-19 17:13:56.177803 | orchestrator | 2025-09-19 17:13:56 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:13:59.220531 | orchestrator | 2025-09-19 17:13:59 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:13:59.232598 | orchestrator | 2025-09-19 17:13:59 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:13:59.235680 | orchestrator | 2025-09-19 17:13:59 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED
2025-09-19 17:13:59.244397 | orchestrator | 2025-09-19 17:13:59 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED
2025-09-19 17:13:59.244472 | orchestrator | 2025-09-19 17:13:59 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:14:02.288225 | orchestrator | 2025-09-19 17:14:02 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:14:02.288551 | orchestrator | 2025-09-19 17:14:02 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:14:02.290869 | orchestrator | 2025-09-19 17:14:02 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED
2025-09-19 17:14:02.292249 | orchestrator | 2025-09-19 17:14:02 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED
2025-09-19 17:14:02.292276 | orchestrator | 2025-09-19 17:14:02 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:14:05.336540 | orchestrator | 2025-09-19 17:14:05 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:14:05.337172 | orchestrator | 2025-09-19 17:14:05 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:14:05.339276 | orchestrator | 2025-09-19 17:14:05 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED
2025-09-19 17:14:05.341069 | orchestrator | 2025-09-19 17:14:05 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED
2025-09-19 17:14:05.341203 | orchestrator | 2025-09-19 17:14:05 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:14:08.385755 | orchestrator | 2025-09-19 17:14:08 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state STARTED
2025-09-19 17:14:08.387541 | orchestrator | 2025-09-19 17:14:08 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED
2025-09-19 17:14:08.389611 | orchestrator | 2025-09-19 17:14:08 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED
2025-09-19 17:14:08.391317 | orchestrator | 2025-09-19 17:14:08 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED
2025-09-19 17:14:08.392190 | orchestrator | 2025-09-19 17:14:08 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:14:11.438365 | orchestrator | 2025-09-19 17:14:11 | INFO  | Task e2432f70-0529-47d0-9d51-e80391228e86 is in state SUCCESS
2025-09-19 17:14:11.440706 | orchestrator |
2025-09-19 17:14:11.440903 | orchestrator |
2025-09-19 17:14:11.440921 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-19 17:14:11.440934 | orchestrator |
2025-09-19 17:14:11.441022 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-19 17:14:11.441061 | orchestrator | Friday 19 September 2025 17:11:18 +0000 (0:00:00.196) 0:00:00.196 ******
2025-09-19 17:14:11.441072 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:14:11.441085 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:14:11.441095 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:14:11.441106 | orchestrator |
2025-09-19 17:14:11.441117 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-19 17:14:11.441128 | orchestrator | Friday 19 September 2025 17:11:18 +0000 (0:00:00.221) 0:00:00.418 ******
2025-09-19 17:14:11.441138 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-09-19 17:14:11.441150 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-09-19 17:14:11.441161 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-09-19 17:14:11.441171 | orchestrator |
2025-09-19 17:14:11.441182 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-09-19 17:14:11.441193 | orchestrator |
2025-09-19 17:14:11.441203 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-19 17:14:11.441214 | orchestrator | Friday 19 September 2025 17:11:19 +0000 (0:00:00.377) 0:00:00.795 ******
2025-09-19 17:14:11.441225 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:14:11.441236 | orchestrator |
2025-09-19 17:14:11.441247 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-09-19 17:14:11.441257 | orchestrator | Friday 19 September 2025 17:11:19 +0000 (0:00:00.830) 0:00:01.625 ******
2025-09-19 17:14:11.441268 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
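The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` messages at the top of this excerpt come from a loop that re-queries each submitted task until it leaves the STARTED state. A minimal sketch of such a wait loop, assuming a hypothetical `get_task_state` callable (this is not the actual osism client API):

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, log=print):
    """Poll every `interval` seconds until all tasks leave STARTED.

    `get_task_state` is a hypothetical callable mapping a task id to a
    state string such as "STARTED" or "SUCCESS".
    """
    pending = list(task_ids)
    results = {}
    while pending:
        for task_id in list(pending):
            state = get_task_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                # Task finished (SUCCESS, FAILURE, ...) - stop polling it.
                results[task_id] = state
                pending.remove(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return results
```

Note how in the log above only the task that reaches SUCCESS drops out of the polling set while the others keep being re-checked every cycle.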
2025-09-19 17:14:11.441279 | orchestrator |
2025-09-19 17:14:11.441290 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-09-19 17:14:11.441301 | orchestrator | Friday 19 September 2025 17:11:23 +0000 (0:00:03.724) 0:00:05.350 ******
2025-09-19 17:14:11.441312 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-09-19 17:14:11.441322 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-09-19 17:14:11.441333 | orchestrator |
2025-09-19 17:14:11.441344 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-09-19 17:14:11.441355 | orchestrator | Friday 19 September 2025 17:11:30 +0000 (0:00:07.191) 0:00:12.542 ******
2025-09-19 17:14:11.441366 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-09-19 17:14:11.441377 | orchestrator |
2025-09-19 17:14:11.441388 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-09-19 17:14:11.441400 | orchestrator | Friday 19 September 2025 17:11:34 +0000 (0:00:03.496) 0:00:16.038 ******
2025-09-19 17:14:11.441411 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-09-19 17:14:11.441422 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-09-19 17:14:11.441432 | orchestrator |
2025-09-19 17:14:11.441443 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-09-19 17:14:11.441454 | orchestrator | Friday 19 September 2025 17:11:38 +0000 (0:00:04.198) 0:00:20.237 ******
2025-09-19 17:14:11.441464 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-09-19 17:14:11.441475 | orchestrator |
2025-09-19 17:14:11.441486 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-09-19 17:14:11.441497 | orchestrator | 
Friday 19 September 2025 17:11:42 +0000 (0:00:03.674) 0:00:23.912 ******
2025-09-19 17:14:11.441508 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-09-19 17:14:11.441520 | orchestrator |
2025-09-19 17:14:11.441532 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-09-19 17:14:11.441544 | orchestrator | Friday 19 September 2025 17:11:46 +0000 (0:00:04.259) 0:00:28.171 ******
2025-09-19 17:14:11.441592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 17:14:11.441622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 17:14:11.441643 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 17:14:11.441664 | orchestrator | 2025-09-19 17:14:11.441676 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-19 17:14:11.441689 | orchestrator | Friday 19 September 2025 17:11:50 +0000 (0:00:04.305) 0:00:32.477 ****** 2025-09-19 
17:14:11.441701 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:14:11.441714 | orchestrator |
2025-09-19 17:14:11.441735 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-09-19 17:14:11.441748 | orchestrator | Friday 19 September 2025 17:11:51 +0000 (0:00:00.806) 0:00:33.283 ******
2025-09-19 17:14:11.441760 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:14:11.441772 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:14:11.441784 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:14:11.441796 | orchestrator |
2025-09-19 17:14:11.441808 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-09-19 17:14:11.441821 | orchestrator | Friday 19 September 2025 17:11:54 +0000 (0:00:03.171) 0:00:36.455 ******
2025-09-19 17:14:11.441833 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 17:14:11.441846 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 17:14:11.441859 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 17:14:11.441870 | orchestrator |
2025-09-19 17:14:11.441881 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-09-19 17:14:11.441892 | orchestrator | Friday 19 September 2025 17:11:56 +0000 (0:00:01.782) 0:00:38.237 ******
2025-09-19 17:14:11.441903 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 17:14:11.441914 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 17:14:11.441925 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-09-19 17:14:11.441936 | orchestrator |
2025-09-19 17:14:11.441966 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-09-19 17:14:11.441977 | orchestrator | Friday 19 September 2025 17:11:57 +0000 (0:00:01.209) 0:00:39.447 ******
2025-09-19 17:14:11.441988 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:14:11.441998 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:14:11.442009 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:14:11.442061 | orchestrator |
2025-09-19 17:14:11.442072 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-09-19 17:14:11.442083 | orchestrator | Friday 19 September 2025 17:11:58 +0000 (0:00:00.233) 0:00:40.179 ******
2025-09-19 17:14:11.442093 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:14:11.442104 | orchestrator |
2025-09-19 17:14:11.442115 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-09-19 17:14:11.442126 | orchestrator | Friday 19 September 2025 17:11:58 +0000 (0:00:00.252) 0:00:40.412 ******
2025-09-19 17:14:11.442145 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:14:11.442156 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:14:11.442167 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:14:11.442177 | orchestrator |
2025-09-19 17:14:11.442188 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-09-19 17:14:11.442199 | orchestrator | Friday 19 September 2025 17:11:58 +0000 (0:00:00.488) 0:00:40.665 ******
2025-09-19 17:14:11.442210 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:14:11.442221 | orchestrator |
2025-09-19 17:14:11.442313 | orchestrator | TASK [service-cert-copy : glance | Copying
over extra CA certificates] ********* 2025-09-19 17:14:11.442329 | orchestrator | Friday 19 September 2025 17:11:59 +0000 (0:00:00.488) 0:00:41.154 ****** 2025-09-19 17:14:11.442357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 17:14:11.442372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 17:14:11.442399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 17:14:11.442412 | orchestrator | 2025-09-19 17:14:11.442423 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-19 17:14:11.442434 | orchestrator | Friday 19 September 2025 17:12:03 +0000 (0:00:03.725) 0:00:44.879 ****** 2025-09-19 17:14:11.442456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': 
'', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 17:14:11.442468 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:11.442485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 17:14:11.442503 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:11.442523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 17:14:11.442536 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:11.442547 | orchestrator | 2025-09-19 17:14:11.442558 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-19 17:14:11.442569 | orchestrator | Friday 19 September 2025 17:12:07 +0000 (0:00:03.965) 0:00:48.845 ****** 2025-09-19 17:14:11.442581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 17:14:11.442600 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:11.442619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 17:14:11.442630 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:11.442642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-19 17:14:11.442661 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:11.442672 | orchestrator | 2025-09-19 17:14:11.442683 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-19 17:14:11.442726 | orchestrator | Friday 19 September 2025 17:12:11 +0000 (0:00:03.986) 0:00:52.831 ****** 2025-09-19 17:14:11.442738 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:11.442749 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:11.442760 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:11.442771 | orchestrator | 2025-09-19 17:14:11.442782 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-19 17:14:11.442793 | orchestrator | Friday 19 September 2025 17:12:15 +0000 (0:00:04.058) 0:00:56.890 ****** 2025-09-19 17:14:11.442821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 17:14:11.442835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 17:14:11.442859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 17:14:11.442872 | orchestrator | 2025-09-19 17:14:11.442882 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-09-19 17:14:11.442893 | orchestrator | Friday 19 September 2025 17:12:18 +0000 (0:00:03.493) 0:01:00.383 ****** 2025-09-19 17:14:11.442904 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:14:11.442915 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:14:11.442926 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:14:11.442936 | orchestrator | 2025-09-19 17:14:11.442985 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-09-19 17:14:11.442997 | orchestrator | Friday 19 September 2025 17:12:27 +0000 (0:00:09.002) 0:01:09.386 ****** 2025-09-19 17:14:11.443010 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:11.443022 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:11.443034 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:11.443045 | 
orchestrator | 2025-09-19 17:14:11.443058 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-09-19 17:14:11.443077 | orchestrator | Friday 19 September 2025 17:12:32 +0000 (0:00:04.847) 0:01:14.233 ****** 2025-09-19 17:14:11.443090 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:11.443109 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:11.443121 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:11.443134 | orchestrator | 2025-09-19 17:14:11.443147 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-09-19 17:14:11.443159 | orchestrator | Friday 19 September 2025 17:12:36 +0000 (0:00:03.975) 0:01:18.209 ****** 2025-09-19 17:14:11.443171 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:11.443183 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:11.443195 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:11.443208 | orchestrator | 2025-09-19 17:14:11.443221 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-09-19 17:14:11.443233 | orchestrator | Friday 19 September 2025 17:12:39 +0000 (0:00:03.202) 0:01:21.412 ****** 2025-09-19 17:14:11.443245 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:11.443257 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:11.443269 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:11.443281 | orchestrator | 2025-09-19 17:14:11.443294 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-09-19 17:14:11.443306 | orchestrator | Friday 19 September 2025 17:12:46 +0000 (0:00:06.429) 0:01:27.841 ****** 2025-09-19 17:14:11.443317 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:11.443328 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:11.443339 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:11.443349 | 
orchestrator | 2025-09-19 17:14:11.443360 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-09-19 17:14:11.443371 | orchestrator | Friday 19 September 2025 17:12:46 +0000 (0:00:00.365) 0:01:28.207 ****** 2025-09-19 17:14:11.443382 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-19 17:14:11.443393 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:11.443404 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-19 17:14:11.443414 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:11.443425 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-09-19 17:14:11.443436 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:11.443447 | orchestrator | 2025-09-19 17:14:11.443458 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-19 17:14:11.443469 | orchestrator | Friday 19 September 2025 17:12:50 +0000 (0:00:04.134) 0:01:32.342 ****** 2025-09-19 17:14:11.443485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': 
{'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 17:14:11.443512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 17:14:11.443525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-19 17:14:11.443537 | orchestrator | 2025-09-19 17:14:11.443554 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-19 17:14:11.443565 | orchestrator | Friday 19 September 2025 17:12:55 +0000 (0:00:05.067) 0:01:37.409 ****** 2025-09-19 17:14:11.443576 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:11.443586 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:11.443603 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:11.443614 | orchestrator | 2025-09-19 17:14:11.443625 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-09-19 17:14:11.443636 | orchestrator | Friday 19 September 2025 17:12:56 +0000 (0:00:00.547) 0:01:37.956 ****** 2025-09-19 17:14:11.443647 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:14:11.443657 | orchestrator | 2025-09-19 17:14:11.443668 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-09-19 17:14:11.443679 | orchestrator | Friday 19 September 2025 17:12:58 +0000 (0:00:02.129) 0:01:40.086 ****** 2025-09-19 17:14:11.443690 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:14:11.443701 | orchestrator | 2025-09-19 17:14:11.443712 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-09-19 17:14:11.443722 | orchestrator | Friday 19 September 2025 17:13:00 +0000 (0:00:02.195) 0:01:42.283 ****** 2025-09-19 17:14:11.443733 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:14:11.443744 
| orchestrator | 2025-09-19 17:14:11.443755 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-09-19 17:14:11.443766 | orchestrator | Friday 19 September 2025 17:13:02 +0000 (0:00:02.090) 0:01:44.373 ****** 2025-09-19 17:14:11.443776 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:14:11.443787 | orchestrator | 2025-09-19 17:14:11.443798 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-09-19 17:14:11.443809 | orchestrator | Friday 19 September 2025 17:13:33 +0000 (0:00:30.357) 0:02:14.731 ****** 2025-09-19 17:14:11.443820 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:14:11.443831 | orchestrator | 2025-09-19 17:14:11.443847 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-19 17:14:11.443858 | orchestrator | Friday 19 September 2025 17:13:35 +0000 (0:00:02.212) 0:02:16.943 ****** 2025-09-19 17:14:11.443869 | orchestrator | 2025-09-19 17:14:11.443880 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-19 17:14:11.443891 | orchestrator | Friday 19 September 2025 17:13:35 +0000 (0:00:00.117) 0:02:17.061 ****** 2025-09-19 17:14:11.443902 | orchestrator | 2025-09-19 17:14:11.443912 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-09-19 17:14:11.443923 | orchestrator | Friday 19 September 2025 17:13:35 +0000 (0:00:00.091) 0:02:17.152 ****** 2025-09-19 17:14:11.443934 | orchestrator | 2025-09-19 17:14:11.444102 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-09-19 17:14:11.444115 | orchestrator | Friday 19 September 2025 17:13:35 +0000 (0:00:00.066) 0:02:17.219 ****** 2025-09-19 17:14:11.444126 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:14:11.444137 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:14:11.444147 | 
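The database tasks above (create database, create user, enable `log_bin_trust_function_creators`, run the bootstrap container, disable it again) follow a fixed order. A minimal sketch of that ordering, with illustrative statement text rather than the exact SQL kolla-ansible emits:

```python
def glance_bootstrap_statements(db_name="glance", db_user="glance"):
    """Ordered steps of the Glance DB bootstrap seen in the log.

    The log_bin_trust_function_creators toggle temporarily allows
    non-SUPER accounts to create stored functions on a binlogging
    MariaDB primary while the schema migration runs, then reverts.
    Statement text is illustrative (an assumption), not verbatim.
    """
    return [
        f"CREATE DATABASE IF NOT EXISTS {db_name};",
        f"GRANT ALL PRIVILEGES ON {db_name}.* TO '{db_user}'@'%';",
        "SET GLOBAL log_bin_trust_function_creators = 1;",
        "-- run schema migration inside the glance bootstrap container",
        "SET GLOBAL log_bin_trust_function_creators = 0;",
    ]
```

The enable/disable pair brackets the bootstrap so the relaxed setting never outlives the migration.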
orchestrator | changed: [testbed-node-2] 2025-09-19 17:14:11.444157 | orchestrator | 2025-09-19 17:14:11.444166 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:14:11.444177 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-19 17:14:11.444188 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 17:14:11.444198 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 17:14:11.444208 | orchestrator | 2025-09-19 17:14:11.444217 | orchestrator | 2025-09-19 17:14:11.444227 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:14:11.444236 | orchestrator | Friday 19 September 2025 17:14:09 +0000 (0:00:33.710) 0:02:50.929 ****** 2025-09-19 17:14:11.444246 | orchestrator | =============================================================================== 2025-09-19 17:14:11.444256 | orchestrator | glance : Restart glance-api container ---------------------------------- 33.71s 2025-09-19 17:14:11.444265 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.36s 2025-09-19 17:14:11.444283 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.00s 2025-09-19 17:14:11.444293 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 7.19s 2025-09-19 17:14:11.444302 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 6.43s 2025-09-19 17:14:11.444312 | orchestrator | glance : Check glance containers ---------------------------------------- 5.07s 2025-09-19 17:14:11.444321 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.85s 2025-09-19 17:14:11.444331 | orchestrator | glance : Ensuring 
config directories exist ------------------------------ 4.31s 2025-09-19 17:14:11.444340 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.26s 2025-09-19 17:14:11.444350 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.20s 2025-09-19 17:14:11.444359 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.13s 2025-09-19 17:14:11.444369 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.06s 2025-09-19 17:14:11.444378 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.99s 2025-09-19 17:14:11.444388 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.98s 2025-09-19 17:14:11.444397 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.97s 2025-09-19 17:14:11.444407 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.73s 2025-09-19 17:14:11.444422 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.72s 2025-09-19 17:14:11.444432 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.67s 2025-09-19 17:14:11.444441 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.50s 2025-09-19 17:14:11.444451 | orchestrator | glance : Copying over config.json files for services -------------------- 3.49s 2025-09-19 17:14:11.444460 | orchestrator | 2025-09-19 17:14:11 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:14:11.444470 | orchestrator | 2025-09-19 17:14:11 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:14:11.444485 | orchestrator | 2025-09-19 17:14:11 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:14:11.445500 | 
orchestrator | 2025-09-19 17:14:11 | INFO  | Task 1978c272-f4cb-4237-b225-09ef49e2ae1a is in state STARTED 2025-09-19 17:14:11.445673 | orchestrator | 2025-09-19 17:14:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:14:14.489325 | orchestrator | 2025-09-19 17:14:14 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:14:14.497460 | orchestrator | 2025-09-19 17:14:14 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:14:14.500057 | orchestrator | 2025-09-19 17:14:14 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:14:14.503354 | orchestrator | 2025-09-19 17:14:14 | INFO  | Task 1978c272-f4cb-4237-b225-09ef49e2ae1a is in state STARTED 2025-09-19 17:14:14.503890 | orchestrator | 2025-09-19 17:14:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:14:17.544042 | orchestrator | 2025-09-19 17:14:17 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:14:17.545164 | orchestrator | 2025-09-19 17:14:17 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:14:17.546630 | orchestrator | 2025-09-19 17:14:17 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:14:17.548126 | orchestrator | 2025-09-19 17:14:17 | INFO  | Task 1978c272-f4cb-4237-b225-09ef49e2ae1a is in state STARTED 2025-09-19 17:14:17.548172 | orchestrator | 2025-09-19 17:14:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:14:20.599357 | orchestrator | 2025-09-19 17:14:20 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:14:20.600462 | orchestrator | 2025-09-19 17:14:20 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:14:20.601782 | orchestrator | 2025-09-19 17:14:20 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:14:20.603627 | orchestrator | 2025-09-19 
17:14:20 | INFO  | Task 1978c272-f4cb-4237-b225-09ef49e2ae1a is in state STARTED 2025-09-19 17:14:20.603680 | orchestrator | 2025-09-19 17:14:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:14:23.657773 | orchestrator | 2025-09-19 17:14:23 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:14:23.659349 | orchestrator | 2025-09-19 17:14:23 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:14:23.660799 | orchestrator | 2025-09-19 17:14:23 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:14:23.662521 | orchestrator | 2025-09-19 17:14:23 | INFO  | Task 1978c272-f4cb-4237-b225-09ef49e2ae1a is in state STARTED 2025-09-19 17:14:23.662567 | orchestrator | 2025-09-19 17:14:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:14:26.715305 | orchestrator | 2025-09-19 17:14:26 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:14:26.716468 | orchestrator | 2025-09-19 17:14:26 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:14:26.718364 | orchestrator | 2025-09-19 17:14:26 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:14:26.720305 | orchestrator | 2025-09-19 17:14:26 | INFO  | Task 1978c272-f4cb-4237-b225-09ef49e2ae1a is in state STARTED 2025-09-19 17:14:26.720338 | orchestrator | 2025-09-19 17:14:26 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:14:29.765679 | orchestrator | 2025-09-19 17:14:29 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:14:29.766860 | orchestrator | 2025-09-19 17:14:29 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:14:29.770681 | orchestrator | 2025-09-19 17:14:29 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:14:29.772173 | orchestrator | 2025-09-19 17:14:29 | INFO  | Task 
1978c272-f4cb-4237-b225-09ef49e2ae1a is in state STARTED 2025-09-19 17:14:29.772203 | orchestrator | 2025-09-19 17:14:29 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:14:32.818571 | orchestrator | 2025-09-19 17:14:32 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:14:32.819095 | orchestrator | 2025-09-19 17:14:32 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:14:32.821784 | orchestrator | 2025-09-19 17:14:32 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:14:32.824401 | orchestrator | 2025-09-19 17:14:32 | INFO  | Task 1978c272-f4cb-4237-b225-09ef49e2ae1a is in state STARTED 2025-09-19 17:14:32.824445 | orchestrator | 2025-09-19 17:14:32 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:14:35.866807 | orchestrator | 2025-09-19 17:14:35 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state STARTED 2025-09-19 17:14:35.868990 | orchestrator | 2025-09-19 17:14:35 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:14:35.871579 | orchestrator | 2025-09-19 17:14:35 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:14:35.873861 | orchestrator | 2025-09-19 17:14:35 | INFO  | Task 1978c272-f4cb-4237-b225-09ef49e2ae1a is in state STARTED 2025-09-19 17:14:35.873988 | orchestrator | 2025-09-19 17:14:35 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:14:38.923622 | orchestrator | 2025-09-19 17:14:38 | INFO  | Task 759f001f-25c3-45c5-8be1-8641cb955aec is in state SUCCESS 2025-09-19 17:14:38.923723 | orchestrator | 2025-09-19 17:14:38.926164 | orchestrator | 2025-09-19 17:14:38.926199 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:14:38.926212 | orchestrator | 2025-09-19 17:14:38.926223 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
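The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" entries above are a simple fixed-interval polling loop that exits once every task leaves STARTED. A sketch of that loop, where `get_state` is a hypothetical callable mapping a task ID to its state string:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll until no task reports STARTED, mirroring the log's
    'Wait 1 second(s) until the next check' behaviour.

    get_state: hypothetical callable, task_id -> state string
    (e.g. "STARTED", "SUCCESS"); an assumption for illustration.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        for task_id in sorted(pending):  # sorted() copies, safe to discard
            if get_state(task_id) != "STARTED":
                pending.discard(task_id)
        if pending:
            time.sleep(interval)
    return "SUCCESS"
```

A fixed one-second interval keeps the log readable; the timeout guard is an addition not shown in the log output.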
2025-09-19 17:14:38.926235 | orchestrator | Friday 19 September 2025 17:11:43 +0000 (0:00:00.281) 0:00:00.281 ****** 2025-09-19 17:14:38.926246 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:14:38.926258 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:14:38.926292 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:14:38.926304 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:14:38.926314 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:14:38.926325 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:14:38.926336 | orchestrator | 2025-09-19 17:14:38.926346 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:14:38.926357 | orchestrator | Friday 19 September 2025 17:11:44 +0000 (0:00:00.668) 0:00:00.949 ****** 2025-09-19 17:14:38.926368 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-19 17:14:38.926379 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-19 17:14:38.926389 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-19 17:14:38.926400 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-19 17:14:38.926410 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-19 17:14:38.926421 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-19 17:14:38.926432 | orchestrator | 2025-09-19 17:14:38.926443 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-19 17:14:38.926454 | orchestrator | 2025-09-19 17:14:38.926464 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 17:14:38.926475 | orchestrator | Friday 19 September 2025 17:11:44 +0000 (0:00:00.569) 0:00:01.519 ****** 2025-09-19 17:14:38.926486 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2025-09-19 17:14:38.926498 | orchestrator | 2025-09-19 17:14:38.926509 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-19 17:14:38.926520 | orchestrator | Friday 19 September 2025 17:11:46 +0000 (0:00:01.861) 0:00:03.380 ****** 2025-09-19 17:14:38.926532 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-09-19 17:14:38.926542 | orchestrator | 2025-09-19 17:14:38.926553 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-19 17:14:38.926564 | orchestrator | Friday 19 September 2025 17:11:50 +0000 (0:00:03.944) 0:00:07.328 ****** 2025-09-19 17:14:38.926575 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-19 17:14:38.926586 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-19 17:14:38.926597 | orchestrator | 2025-09-19 17:14:38.926608 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-19 17:14:38.926618 | orchestrator | Friday 19 September 2025 17:11:57 +0000 (0:00:07.125) 0:00:14.453 ****** 2025-09-19 17:14:38.926629 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 17:14:38.926640 | orchestrator | 2025-09-19 17:14:38.926651 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-19 17:14:38.926662 | orchestrator | Friday 19 September 2025 17:12:01 +0000 (0:00:03.444) 0:00:17.898 ****** 2025-09-19 17:14:38.926696 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 17:14:38.926707 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-19 17:14:38.926718 | orchestrator | 2025-09-19 17:14:38.926729 | orchestrator | TASK [service-ks-register : cinder | Creating roles] 
*************************** 2025-09-19 17:14:38.926755 | orchestrator | Friday 19 September 2025 17:12:05 +0000 (0:00:04.145) 0:00:22.043 ****** 2025-09-19 17:14:38.926768 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 17:14:38.926780 | orchestrator | 2025-09-19 17:14:38.926792 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-09-19 17:14:38.926804 | orchestrator | Friday 19 September 2025 17:12:09 +0000 (0:00:03.886) 0:00:25.929 ****** 2025-09-19 17:14:38.926816 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-19 17:14:38.926828 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-19 17:14:38.926840 | orchestrator | 2025-09-19 17:14:38.926853 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-19 17:14:38.926865 | orchestrator | Friday 19 September 2025 17:12:17 +0000 (0:00:08.305) 0:00:34.235 ****** 2025-09-19 17:14:38.926880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.926913 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.926927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.926964 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.926988 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.927002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.927024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.927037 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.927050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.927076 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.927091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.927104 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.927116 | orchestrator | 2025-09-19 17:14:38.927148 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 17:14:38.927160 | orchestrator | Friday 19 September 2025 17:12:19 +0000 (0:00:02.092) 0:00:36.328 ****** 2025-09-19 17:14:38.927171 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:38.927182 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:38.927193 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:38.927203 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:14:38.927214 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:14:38.927224 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:14:38.927235 | orchestrator | 2025-09-19 17:14:38.927246 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 17:14:38.927256 | orchestrator | Friday 19 September 2025 17:12:21 +0000 (0:00:01.515) 0:00:37.843 ****** 2025-09-19 17:14:38.927267 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:38.927278 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:38.927288 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:38.927299 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 17:14:38.927316 | orchestrator | 2025-09-19 17:14:38.927327 | 
orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-19 17:14:38.927337 | orchestrator | Friday 19 September 2025 17:12:23 +0000 (0:00:02.444) 0:00:40.288 ****** 2025-09-19 17:14:38.927348 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-19 17:14:38.927359 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-19 17:14:38.927376 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-19 17:14:38.927387 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-19 17:14:38.927397 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-19 17:14:38.927408 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-19 17:14:38.927419 | orchestrator | 2025-09-19 17:14:38.927429 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-19 17:14:38.927440 | orchestrator | Friday 19 September 2025 17:12:26 +0000 (0:00:02.961) 0:00:43.249 ****** 2025-09-19 17:14:38.927452 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 
'cluster': 'ceph', 'enabled': True}])  2025-09-19 17:14:38.927470 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 17:14:38.927483 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 17:14:38.927500 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 17:14:38.927512 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 17:14:38.927529 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-19 17:14:38.927546 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 17:14:38.927558 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 17:14:38.927576 | orchestrator | changed: 
[testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 17:14:38.927594 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 17:14:38.927606 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 17:14:38.927622 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-19 17:14:38.927634 | orchestrator | 2025-09-19 17:14:38.927645 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-19 17:14:38.927656 | orchestrator | Friday 19 September 2025 17:12:30 +0000 (0:00:03.740) 0:00:46.990 ****** 2025-09-19 17:14:38.927667 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 17:14:38.927678 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-19 17:14:38.927689 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 
2025-09-19 17:14:38.927700 | orchestrator | 2025-09-19 17:14:38.927710 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-19 17:14:38.927721 | orchestrator | Friday 19 September 2025 17:12:33 +0000 (0:00:02.636) 0:00:49.626 ****** 2025-09-19 17:14:38.927732 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-19 17:14:38.927743 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-19 17:14:38.927753 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-19 17:14:38.927764 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 17:14:38.927775 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 17:14:38.927790 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-19 17:14:38.927808 | orchestrator | 2025-09-19 17:14:38.927819 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-19 17:14:38.927830 | orchestrator | Friday 19 September 2025 17:12:36 +0000 (0:00:03.358) 0:00:52.985 ****** 2025-09-19 17:14:38.927840 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-19 17:14:38.927851 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-19 17:14:38.927862 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-19 17:14:38.927872 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-19 17:14:38.927883 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-19 17:14:38.927894 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-19 17:14:38.927904 | orchestrator | 2025-09-19 17:14:38.927915 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-19 17:14:38.927926 | orchestrator | Friday 19 
September 2025 17:12:37 +0000 (0:00:01.038) 0:00:54.023 ****** 2025-09-19 17:14:38.927937 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:38.927975 | orchestrator | 2025-09-19 17:14:38.927986 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-19 17:14:38.927997 | orchestrator | Friday 19 September 2025 17:12:37 +0000 (0:00:00.145) 0:00:54.168 ****** 2025-09-19 17:14:38.928014 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:38.928025 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:38.928036 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:38.928047 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:14:38.928057 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:14:38.928068 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:14:38.928078 | orchestrator | 2025-09-19 17:14:38.928089 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 17:14:38.928100 | orchestrator | Friday 19 September 2025 17:12:38 +0000 (0:00:00.816) 0:00:54.985 ****** 2025-09-19 17:14:38.928112 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 17:14:38.928124 | orchestrator | 2025-09-19 17:14:38.928134 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-19 17:14:38.928145 | orchestrator | Friday 19 September 2025 17:12:39 +0000 (0:00:00.924) 0:00:55.910 ****** 2025-09-19 17:14:38.928157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.928174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.928213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.928234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.928262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.928281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.928307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.928337 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', 
'', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.928451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.928477 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.928496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.928514 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.928533 | orchestrator | 2025-09-19 17:14:38.928552 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-19 17:14:38.928579 | orchestrator | Friday 19 September 2025 17:12:42 +0000 (0:00:03.551) 0:00:59.461 ****** 2025-09-19 17:14:38.928599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 
'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 17:14:38.928639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.928658 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:38.928677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 17:14:38.928696 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.928714 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:38.928732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 17:14:38.928758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.928787 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:38.928805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.928834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.928851 | orchestrator | skipping: [testbed-node-3] 2025-09-19 
17:14:38.928868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.928887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.928905 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:14:38.928931 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.929040 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.929075 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:14:38.929099 | orchestrator | 2025-09-19 17:14:38.929117 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-19 17:14:38.929137 | orchestrator | Friday 19 September 2025 17:12:45 +0000 (0:00:02.855) 0:01:02.316 ****** 2025-09-19 17:14:38.929173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 17:14:38.929199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.929217 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:38.929235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 17:14:38.929276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.929295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 17:14:38.929314 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:38.929345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 
'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.929370 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:38.929395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.929414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.929431 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:14:38.929470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.929490 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.929508 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:14:38.929537 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.929553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.929569 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:14:38.929586 | orchestrator | 2025-09-19 17:14:38.929602 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-19 17:14:38.929618 | orchestrator | Friday 19 September 2025 17:12:47 +0000 (0:00:01.540) 0:01:03.857 ****** 2025-09-19 17:14:38.929634 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.929669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.929687 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 
'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.929716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.929732 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.929750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.929789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.929807 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.929832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.929844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.929854 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.929870 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.929880 | orchestrator | 2025-09-19 17:14:38.929890 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-19 17:14:38.929900 | orchestrator | Friday 19 September 2025 17:12:51 +0000 (0:00:03.712) 0:01:07.569 ****** 2025-09-19 17:14:38.929909 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 17:14:38.929920 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:14:38.929929 | orchestrator | skipping: [testbed-node-4] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 17:14:38.929939 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:14:38.929981 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-19 17:14:38.929998 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:14:38.930014 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 17:14:38.930058 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 17:14:38.930068 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-19 17:14:38.930077 | orchestrator | 2025-09-19 17:14:38.930087 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-19 17:14:38.930101 | orchestrator | Friday 19 September 2025 17:12:52 +0000 (0:00:01.978) 0:01:09.547 ****** 2025-09-19 17:14:38.930118 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.930146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.930172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.930203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.930229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.930257 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.930275 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.930299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.930310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.930325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.930335 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.930346 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.930355 | orchestrator | 2025-09-19 17:14:38.930365 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-19 17:14:38.930375 | orchestrator | Friday 19 September 2025 17:13:01 +0000 (0:00:08.279) 0:01:17.827 ****** 2025-09-19 17:14:38.930390 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:38.930400 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:38.930409 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:38.930419 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:14:38.930428 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:14:38.930438 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:14:38.930447 | orchestrator | 2025-09-19 17:14:38.930464 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-19 17:14:38.930474 | orchestrator | Friday 19 September 2025 17:13:03 +0000 (0:00:02.456) 0:01:20.284 ****** 2025-09-19 17:14:38.930484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 17:14:38.930495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.930505 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:38.930519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 17:14:38.930530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-19 17:14:38.930548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.930565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.930575 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:38.930592 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:38.930608 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.930625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.930642 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:14:38.930670 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.930697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.930737 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:14:38.930767 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.930786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-19 17:14:38.930816 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:14:38.930834 | orchestrator | 2025-09-19 17:14:38.930849 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-19 17:14:38.930864 | orchestrator | Friday 19 September 2025 17:13:06 +0000 (0:00:02.793) 0:01:23.077 ****** 2025-09-19 17:14:38.930874 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:38.930884 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:38.930893 | orchestrator | skipping: 
[testbed-node-2] 2025-09-19 17:14:38.930903 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:14:38.930912 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:14:38.930921 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:14:38.930931 | orchestrator | 2025-09-19 17:14:38.930940 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-19 17:14:38.930973 | orchestrator | Friday 19 September 2025 17:13:07 +0000 (0:00:00.600) 0:01:23.678 ****** 2025-09-19 17:14:38.930989 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.931000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.931026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.931037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-19 17:14:38.931047 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.931065 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.931075 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.931098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.931109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.931119 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.931129 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-19 17:14:38.931143 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}}) 2025-09-19 17:14:38.931153 | orchestrator | 2025-09-19 17:14:38.931163 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-19 17:14:38.931179 | orchestrator | Friday 19 September 2025 17:13:10 +0000 (0:00:02.909) 0:01:26.587 ****** 2025-09-19 17:14:38.931188 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:38.931198 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:14:38.931207 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:14:38.931217 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:14:38.931226 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:14:38.931235 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:14:38.931245 | orchestrator | 2025-09-19 17:14:38.931254 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-19 17:14:38.931264 | orchestrator | Friday 19 September 2025 17:13:10 +0000 (0:00:00.547) 0:01:27.135 ****** 2025-09-19 17:14:38.931273 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:14:38.931283 | orchestrator | 2025-09-19 17:14:38.931292 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-19 17:14:38.931302 | orchestrator | Friday 19 September 2025 17:13:13 +0000 (0:00:02.760) 0:01:29.895 ****** 2025-09-19 17:14:38.931311 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:14:38.931320 | orchestrator | 2025-09-19 17:14:38.931330 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-19 17:14:38.931339 | orchestrator | Friday 19 September 2025 17:13:15 +0000 (0:00:02.369) 0:01:32.264 ****** 2025-09-19 17:14:38.931349 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:14:38.931358 | orchestrator | 2025-09-19 17:14:38.931368 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 17:14:38.931378 | 
orchestrator | Friday 19 September 2025 17:13:33 +0000 (0:00:17.626) 0:01:49.891 ****** 2025-09-19 17:14:38.931387 | orchestrator | 2025-09-19 17:14:38.931402 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 17:14:38.931411 | orchestrator | Friday 19 September 2025 17:13:33 +0000 (0:00:00.081) 0:01:49.972 ****** 2025-09-19 17:14:38.931421 | orchestrator | 2025-09-19 17:14:38.931430 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 17:14:38.931440 | orchestrator | Friday 19 September 2025 17:13:33 +0000 (0:00:00.061) 0:01:50.034 ****** 2025-09-19 17:14:38.931449 | orchestrator | 2025-09-19 17:14:38.931459 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 17:14:38.931468 | orchestrator | Friday 19 September 2025 17:13:33 +0000 (0:00:00.066) 0:01:50.100 ****** 2025-09-19 17:14:38.931477 | orchestrator | 2025-09-19 17:14:38.931487 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 17:14:38.931496 | orchestrator | Friday 19 September 2025 17:13:33 +0000 (0:00:00.065) 0:01:50.166 ****** 2025-09-19 17:14:38.931506 | orchestrator | 2025-09-19 17:14:38.931515 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-19 17:14:38.931524 | orchestrator | Friday 19 September 2025 17:13:33 +0000 (0:00:00.068) 0:01:50.235 ****** 2025-09-19 17:14:38.931534 | orchestrator | 2025-09-19 17:14:38.931543 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-19 17:14:38.931553 | orchestrator | Friday 19 September 2025 17:13:33 +0000 (0:00:00.069) 0:01:50.304 ****** 2025-09-19 17:14:38.931562 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:14:38.931571 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:14:38.931581 | orchestrator | changed: 
[testbed-node-1] 2025-09-19 17:14:38.931590 | orchestrator | 2025-09-19 17:14:38.931600 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-19 17:14:38.931609 | orchestrator | Friday 19 September 2025 17:13:52 +0000 (0:00:18.292) 0:02:08.596 ****** 2025-09-19 17:14:38.931619 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:14:38.931628 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:14:38.931637 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:14:38.931647 | orchestrator | 2025-09-19 17:14:38.931656 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-19 17:14:38.931666 | orchestrator | Friday 19 September 2025 17:13:58 +0000 (0:00:06.120) 0:02:14.717 ****** 2025-09-19 17:14:38.931681 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:14:38.931691 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:14:38.931700 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:14:38.931709 | orchestrator | 2025-09-19 17:14:38.931719 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-19 17:14:38.931728 | orchestrator | Friday 19 September 2025 17:14:31 +0000 (0:00:33.689) 0:02:48.406 ****** 2025-09-19 17:14:38.931738 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:14:38.931747 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:14:38.931756 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:14:38.931766 | orchestrator | 2025-09-19 17:14:38.931775 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-19 17:14:38.931785 | orchestrator | Friday 19 September 2025 17:14:37 +0000 (0:00:05.947) 0:02:54.353 ****** 2025-09-19 17:14:38.931795 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:14:38.931804 | orchestrator | 2025-09-19 17:14:38.931814 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-19 17:14:38.931824 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-19 17:14:38.931834 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 17:14:38.931848 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 17:14:38.931858 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 17:14:38.931868 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 17:14:38.931877 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-19 17:14:38.931887 | orchestrator | 2025-09-19 17:14:38.931896 | orchestrator | 2025-09-19 17:14:38.931906 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:14:38.931915 | orchestrator | Friday 19 September 2025 17:14:38 +0000 (0:00:00.633) 0:02:54.986 ****** 2025-09-19 17:14:38.931925 | orchestrator | =============================================================================== 2025-09-19 17:14:38.931934 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 33.69s 2025-09-19 17:14:38.931997 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 18.29s 2025-09-19 17:14:38.932008 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.63s 2025-09-19 17:14:38.932017 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.31s 2025-09-19 17:14:38.932027 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 8.28s 2025-09-19 17:14:38.932036 | orchestrator | 
service-ks-register : cinder | Creating endpoints ----------------------- 7.13s 2025-09-19 17:14:38.932046 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.12s 2025-09-19 17:14:38.932055 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.95s 2025-09-19 17:14:38.932071 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.15s 2025-09-19 17:14:38.932081 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.94s 2025-09-19 17:14:38.932090 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.89s 2025-09-19 17:14:38.932100 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.74s 2025-09-19 17:14:38.932109 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.71s 2025-09-19 17:14:38.932126 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.55s 2025-09-19 17:14:38.932136 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.44s 2025-09-19 17:14:38.932144 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.36s 2025-09-19 17:14:38.932151 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.96s 2025-09-19 17:14:38.932159 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.91s 2025-09-19 17:14:38.932167 | orchestrator | service-cert-copy : cinder | Copying over backend internal TLS certificate --- 2.86s 2025-09-19 17:14:38.932175 | orchestrator | cinder : Copying over existing policy file ------------------------------ 2.79s 2025-09-19 17:14:38.932183 | orchestrator | 2025-09-19 17:14:38 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:14:38.932191 | orchestrator 
| 2025-09-19 17:14:38 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state STARTED 2025-09-19 17:14:38.932199 | orchestrator | 2025-09-19 17:14:38 | INFO  | Task 1978c272-f4cb-4237-b225-09ef49e2ae1a is in state STARTED 2025-09-19 17:14:38.932207 | orchestrator | 2025-09-19 17:14:38 | INFO  | Wait 1 second(s) until the next check [... repeated STARTED polls for the same four tasks, re-checked every ~3 seconds, omitted ...] 2025-09-19 17:15:09.482314 | orchestrator | 2025-09-19 17:15:09 | INFO  | Task 1978c272-f4cb-4237-b225-09ef49e2ae1a is in state SUCCESS 2025-09-19 17:16:16.647241 | orchestrator | 2025-09-19 17:16:16 | INFO  | Task 202117fe-95a9-4b32-af6f-6d5a3e490d7d is in state SUCCESS 2025-09-19 17:16:59.303522 | orchestrator | 2025-09-19 17:16:59 | INFO  | Task 8da55636-a79f-4dd9-9540-d58f0ffdf787 is in state SUCCESS 2025-09-19 17:16:59.304449 | orchestrator | 2025-09-19 17:16:59.304482 | orchestrator | 2025-09-19 17:16:59.304495 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:16:59.304507 | orchestrator | 2025-09-19 17:16:59.304519 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19
17:16:59.304530 | orchestrator | Friday 19 September 2025 17:14:13 +0000 (0:00:00.296) 0:00:00.296 ****** 2025-09-19 17:16:59.304542 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:16:59.304585 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:16:59.304598 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:16:59.304609 | orchestrator | 2025-09-19 17:16:59.304621 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:16:59.304632 | orchestrator | Friday 19 September 2025 17:14:13 +0000 (0:00:00.313) 0:00:00.609 ****** 2025-09-19 17:16:59.304644 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-19 17:16:59.304655 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-19 17:16:59.304666 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-19 17:16:59.304677 | orchestrator | 2025-09-19 17:16:59.304715 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-19 17:16:59.304728 | orchestrator | 2025-09-19 17:16:59.304739 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 17:16:59.304813 | orchestrator | Friday 19 September 2025 17:14:14 +0000 (0:00:00.436) 0:00:01.045 ****** 2025-09-19 17:16:59.304956 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:16:59.304970 | orchestrator | 2025-09-19 17:16:59.305029 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-19 17:16:59.305040 | orchestrator | Friday 19 September 2025 17:14:14 +0000 (0:00:00.590) 0:00:01.636 ****** 2025-09-19 17:16:59.305107 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-19 17:16:59.305122 | orchestrator | 2025-09-19 17:16:59.305134 | orchestrator | TASK [service-ks-register : octavia | Creating 
endpoints] ********************** 2025-09-19 17:16:59.305161 | orchestrator | Friday 19 September 2025 17:14:18 +0000 (0:00:03.830) 0:00:05.467 ****** 2025-09-19 17:16:59.305174 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-19 17:16:59.305187 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-19 17:16:59.305199 | orchestrator | 2025-09-19 17:16:59.305211 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-19 17:16:59.305224 | orchestrator | Friday 19 September 2025 17:14:25 +0000 (0:00:06.304) 0:00:11.771 ****** 2025-09-19 17:16:59.305251 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 17:16:59.305264 | orchestrator | 2025-09-19 17:16:59.305276 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-19 17:16:59.305288 | orchestrator | Friday 19 September 2025 17:14:28 +0000 (0:00:03.205) 0:00:14.977 ****** 2025-09-19 17:16:59.305301 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 17:16:59.305324 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-19 17:16:59.305352 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-19 17:16:59.305364 | orchestrator | 2025-09-19 17:16:59.305376 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-19 17:16:59.305389 | orchestrator | Friday 19 September 2025 17:14:36 +0000 (0:00:08.681) 0:00:23.658 ****** 2025-09-19 17:16:59.305427 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 17:16:59.305440 | orchestrator | 2025-09-19 17:16:59.305483 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-19 17:16:59.305494 | orchestrator | Friday 19 September 2025 17:14:40 
+0000 (0:00:03.618) 0:00:27.277 ****** 2025-09-19 17:16:59.305505 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-19 17:16:59.305516 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-19 17:16:59.305526 | orchestrator | 2025-09-19 17:16:59.305537 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-19 17:16:59.305548 | orchestrator | Friday 19 September 2025 17:14:48 +0000 (0:00:07.958) 0:00:35.235 ****** 2025-09-19 17:16:59.305558 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-19 17:16:59.305569 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-19 17:16:59.305579 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-09-19 17:16:59.305590 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-19 17:16:59.305600 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-19 17:16:59.305611 | orchestrator | 2025-09-19 17:16:59.305622 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-19 17:16:59.305633 | orchestrator | Friday 19 September 2025 17:15:04 +0000 (0:00:16.239) 0:00:51.475 ****** 2025-09-19 17:16:59.305643 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:16:59.305654 | orchestrator | 2025-09-19 17:16:59.305665 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-19 17:16:59.305676 | orchestrator | Friday 19 September 2025 17:15:05 +0000 (0:00:00.546) 0:00:52.021 ****** 2025-09-19 17:16:59.305702 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.", "response": "<html><body><h1>503 Service Unavailable</h1>\nNo server is available to handle this request.\n</body></html>\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request."} 2025-09-19 17:16:59.305779 | orchestrator | 2025-09-19 17:16:59.305792 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:16:59.305804 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-09-19 17:16:59.305817 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:16:59.305828 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:16:59.305839 | orchestrator | 2025-09-19 17:16:59.305850 | orchestrator | 2025-09-19 17:16:59.305861 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:16:59.305872 | orchestrator | Friday 19 September 2025 17:15:08 +0000 (0:00:03.457) 0:00:55.479 ****** 2025-09-19 17:16:59.305882 | orchestrator | =============================================================================== 2025-09-19 17:16:59.305893 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.24s 2025-09-19 17:16:59.305904 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.68s 2025-09-19 17:16:59.305915 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.96s 2025-09-19 17:16:59.305926 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.30s 2025-09-19 17:16:59.305937 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.83s 2025-09-19 17:16:59.305947 | orchestrator | service-ks-register : octavia | Creating roles --------------------------
3.62s 2025-09-19 17:16:59.305964 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.46s 2025-09-19 17:16:59.306118 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.21s 2025-09-19 17:16:59.306133 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.59s 2025-09-19 17:16:59.306144 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.55s 2025-09-19 17:16:59.306154 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2025-09-19 17:16:59.306165 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-09-19 17:16:59.306175 | orchestrator | 2025-09-19 17:16:59.306186 | orchestrator | 2025-09-19 17:16:59.306197 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:16:59.306207 | orchestrator | 2025-09-19 17:16:59.306218 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:16:59.306229 | orchestrator | Friday 19 September 2025 17:13:29 +0000 (0:00:00.177) 0:00:00.177 ****** 2025-09-19 17:16:59.306240 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:16:59.306251 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:16:59.306261 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:16:59.306272 | orchestrator | 2025-09-19 17:16:59.306283 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:16:59.306294 | orchestrator | Friday 19 September 2025 17:13:29 +0000 (0:00:00.312) 0:00:00.489 ****** 2025-09-19 17:16:59.306305 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-19 17:16:59.306316 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-19 17:16:59.306327 | orchestrator | ok: [testbed-node-2] => 
(item=enable_nova_True) 2025-09-19 17:16:59.306337 | orchestrator | 2025-09-19 17:16:59.306348 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-19 17:16:59.306359 | orchestrator | 2025-09-19 17:16:59.306370 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-19 17:16:59.306381 | orchestrator | Friday 19 September 2025 17:13:30 +0000 (0:00:00.597) 0:00:01.087 ****** 2025-09-19 17:16:59.306391 | orchestrator | 2025-09-19 17:16:59.306402 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-09-19 17:16:59.306425 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:16:59.306436 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:16:59.306446 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:16:59.306457 | orchestrator | 2025-09-19 17:16:59.306468 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:16:59.306479 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:16:59.306490 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:16:59.306501 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:16:59.306511 | orchestrator | 2025-09-19 17:16:59.306522 | orchestrator | 2025-09-19 17:16:59.306533 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:16:59.306544 | orchestrator | Friday 19 September 2025 17:16:15 +0000 (0:02:44.731) 0:02:45.819 ****** 2025-09-19 17:16:59.306555 | orchestrator | =============================================================================== 2025-09-19 17:16:59.306565 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 164.73s 2025-09-19 
17:16:59.306576 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2025-09-19 17:16:59.306586 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-09-19 17:16:59.306597 | orchestrator | 2025-09-19 17:16:59.306607 | orchestrator | 2025-09-19 17:16:59.306618 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:16:59.306629 | orchestrator | 2025-09-19 17:16:59.306650 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:16:59.306662 | orchestrator | Friday 19 September 2025 17:14:42 +0000 (0:00:00.283) 0:00:00.283 ****** 2025-09-19 17:16:59.306672 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:16:59.306683 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:16:59.306694 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:16:59.306704 | orchestrator | 2025-09-19 17:16:59.306715 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:16:59.306726 | orchestrator | Friday 19 September 2025 17:14:43 +0000 (0:00:00.344) 0:00:00.627 ****** 2025-09-19 17:16:59.306737 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-19 17:16:59.306747 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-19 17:16:59.306758 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-19 17:16:59.306769 | orchestrator | 2025-09-19 17:16:59.306779 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-19 17:16:59.306790 | orchestrator | 2025-09-19 17:16:59.306801 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-19 17:16:59.306811 | orchestrator | Friday 19 September 2025 17:14:43 +0000 (0:00:00.478) 0:00:01.105 ****** 2025-09-19 17:16:59.306822 | orchestrator 
| included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:16:59.306833 | orchestrator | 2025-09-19 17:16:59.306844 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-19 17:16:59.306854 | orchestrator | Friday 19 September 2025 17:14:44 +0000 (0:00:00.635) 0:00:01.740 ****** 2025-09-19 17:16:59.306873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.306899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.306911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.306922 | orchestrator | 2025-09-19 17:16:59.306933 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-19 17:16:59.306944 | orchestrator | Friday 19 September 2025 17:14:45 +0000 (0:00:01.083) 0:00:02.824 ****** 2025-09-19 17:16:59.306955 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-19 17:16:59.306965 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-19 17:16:59.306999 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 17:16:59.307010 | orchestrator | 2025-09-19 17:16:59.307021 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-19 17:16:59.307032 | orchestrator | Friday 19 September 2025 17:14:46 +0000 (0:00:00.854) 0:00:03.678 ****** 2025-09-19 17:16:59.307042 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:16:59.307053 | orchestrator | 2025-09-19 17:16:59.307064 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-19 17:16:59.307075 | orchestrator | Friday 19 September 2025 17:14:46 +0000 (0:00:00.561) 0:00:04.240 ****** 2025-09-19 17:16:59.307105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.307118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.307142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.307154 | orchestrator | 2025-09-19 17:16:59.307165 
| orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-19 17:16:59.307176 | orchestrator | Friday 19 September 2025 17:14:48 +0000 (0:00:01.276) 0:00:05.517 ****** 2025-09-19 17:16:59.307187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 17:16:59.307198 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:16:59.307210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 17:16:59.307221 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:16:59.307232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 17:16:59.307243 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:16:59.307254 | orchestrator | 2025-09-19 17:16:59.307271 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-19 17:16:59.307283 | orchestrator | Friday 19 September 2025 17:14:48 +0000 (0:00:00.325) 0:00:05.842 ****** 2025-09-19 17:16:59.307294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 17:16:59.307312 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:16:59.307328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 17:16:59.307340 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:16:59.307351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-19 17:16:59.307362 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:16:59.307373 | orchestrator | 2025-09-19 17:16:59.307384 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-19 17:16:59.307395 | orchestrator | Friday 19 September 2025 17:14:49 +0000 (0:00:00.754) 0:00:06.597 ****** 2025-09-19 17:16:59.307406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': 
'3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.307418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.307437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.307449 | orchestrator | 2025-09-19 17:16:59.307460 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-19 17:16:59.307477 | orchestrator | Friday 19 September 2025 17:14:50 +0000 (0:00:01.246) 0:00:07.843 ****** 2025-09-19 17:16:59.307488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 
'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.307505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.307517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.307528 | 
orchestrator | 2025-09-19 17:16:59.307539 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-19 17:16:59.307550 | orchestrator | Friday 19 September 2025 17:14:51 +0000 (0:00:01.301) 0:00:09.145 ****** 2025-09-19 17:16:59.307561 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:16:59.307572 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:16:59.307583 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:16:59.307594 | orchestrator | 2025-09-19 17:16:59.307604 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-19 17:16:59.307615 | orchestrator | Friday 19 September 2025 17:14:52 +0000 (0:00:00.383) 0:00:09.529 ****** 2025-09-19 17:16:59.307626 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-19 17:16:59.307637 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-19 17:16:59.307647 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-19 17:16:59.307658 | orchestrator | 2025-09-19 17:16:59.307669 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-19 17:16:59.307680 | orchestrator | Friday 19 September 2025 17:14:53 +0000 (0:00:01.257) 0:00:10.786 ****** 2025-09-19 17:16:59.307691 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-19 17:16:59.307702 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-19 17:16:59.307713 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-19 17:16:59.307731 | orchestrator | 2025-09-19 17:16:59.307742 | 
orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-19 17:16:59.307753 | orchestrator | Friday 19 September 2025 17:14:54 +0000 (0:00:01.232) 0:00:12.019 ****** 2025-09-19 17:16:59.307763 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-19 17:16:59.307774 | orchestrator | 2025-09-19 17:16:59.307785 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-19 17:16:59.307802 | orchestrator | Friday 19 September 2025 17:14:55 +0000 (0:00:00.714) 0:00:12.733 ****** 2025-09-19 17:16:59.307813 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-19 17:16:59.307823 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-19 17:16:59.307834 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:16:59.307845 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:16:59.307855 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:16:59.307866 | orchestrator | 2025-09-19 17:16:59.307877 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-19 17:16:59.307888 | orchestrator | Friday 19 September 2025 17:14:55 +0000 (0:00:00.664) 0:00:13.397 ****** 2025-09-19 17:16:59.307899 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:16:59.307910 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:16:59.307920 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:16:59.307931 | orchestrator | 2025-09-19 17:16:59.307941 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-19 17:16:59.307952 | orchestrator | Friday 19 September 2025 17:14:56 +0000 (0:00:00.415) 0:00:13.813 ****** 2025-09-19 17:16:59.307965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1104784, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8664804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1104784, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8664804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308053 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1104784, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8664804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1104832, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8824496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1104832, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8824496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1104832, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8824496, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1104795, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8694804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1104795, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8694804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1104795, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8694804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308149 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1104834, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8843186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1104834, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8843186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308182 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1104834, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8843186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308193 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1104809, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.875692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1104809, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.875692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1104809, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.875692, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-09-19 17:16:59.308228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1104825, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8809047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1104825, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8809047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1104825, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8809047, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1104782, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8634648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1104782, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8634648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1104782, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8634648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-09-19 17:16:59.308319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1104789, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8664804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1104789, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8664804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1104789, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8664804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-09-19 17:16:59.308362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1104797, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8709044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1104797, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8709044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1104797, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8709044, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1104819, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.877668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1104819, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.877668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1104819, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.877668, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1104830, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.882015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1104830, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.882015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1104830, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.882015, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1104793, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8684802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1104793, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8684802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1104793, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8684802, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1104823, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8794804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1104823, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8794804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1104823, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8794804, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1104813, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8764803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1104813, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8764803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1104813, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8764803, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308592 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1104806, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8751042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1104806, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8751042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1104806, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 
1758240129.0, 'ctime': 1758299156.8751042, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1104802, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8734803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1104802, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8734803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1104802, 'dev': 126, 'nlink': 1, 
'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8734803, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1104821, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8786476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1104821, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8786476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.308698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1104821, 
'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8786476, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308708 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1104799, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8714814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1104799, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8714814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1104799, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8714814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1104829, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8817272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1104829, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8817272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1104829, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8817272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1104926, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9159348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1104926, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9159348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1104926, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9159348, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1104861, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8952885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1104861, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8952885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1104861, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8952885, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308863 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1104849, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8864806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1104849, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8864806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1104849, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8864806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1104878, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8974807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1104878, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8974807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1104878, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8974807, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1104842, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8846657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308954 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1104842, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8846657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1104842, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8846657, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.308998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1104897, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9086185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1104897, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9086185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1104897, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9086185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1104880, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9054809, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1104880, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9054809, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1104880, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9054809, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1104899, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9094477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1104899, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9094477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1104899, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9094477, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1104917, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9153996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1104917, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9153996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1104917, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.9153996, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1104894, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.907481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1104894, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.907481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1104894, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.907481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1104875, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8964808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1104875, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8964808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1104875, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8964808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1104858, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.889506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1104858, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.889506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1104858, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.889506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1104873, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8954806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1104873, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8954806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1104873, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8954806, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1104851, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.889506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1104851, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.889506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1104851, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.889506, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1104876, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8964808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1104876, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8964808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1104876, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8964808, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1104913, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.913481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1104913, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.913481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1104913, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.913481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1104907, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.911481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1104907, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.911481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1104907, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.911481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-19 17:16:59.309536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json',
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1104843, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.885012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.309547 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1104843, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.885012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.309557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1104843, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.885012, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.309567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1104846, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8859532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.309582 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1104846, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8859532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.309598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1104846, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.8859532, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.309613 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1104892, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.907481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.309624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1104892, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.907481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.309635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1104892, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.907481, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.309645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1104904, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.91009, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.309666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1104904, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.91009, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.309676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1104904, 'dev': 126, 'nlink': 1, 'atime': 1758240129.0, 'mtime': 1758240129.0, 'ctime': 1758299156.91009, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-19 17:16:59.309686 | orchestrator | 2025-09-19 17:16:59.309696 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-09-19 17:16:59.309706 | orchestrator | Friday 19 September 2025 17:15:34 +0000 (0:00:38.245) 0:00:52.058 ****** 2025-09-19 17:16:59.309721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.309732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.309743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-19 17:16:59.309752 | orchestrator | 2025-09-19 17:16:59.309762 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-09-19 17:16:59.309772 | orchestrator | Friday 19 September 2025 17:15:35 +0000 (0:00:01.098) 0:00:53.157 ****** 2025-09-19 17:16:59.309782 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:16:59.309791 | orchestrator | 2025-09-19 17:16:59.309801 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-09-19 17:16:59.309815 | orchestrator | Friday 19 September 2025 17:15:37 +0000 (0:00:02.317) 0:00:55.475 ****** 2025-09-19 17:16:59.309824 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:16:59.309834 | orchestrator | 2025-09-19 17:16:59.309843 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-19 17:16:59.309853 | orchestrator | Friday 19 September 2025 17:15:40 +0000 (0:00:02.375) 0:00:57.850 ****** 2025-09-19 17:16:59.309862 | orchestrator | 2025-09-19 17:16:59.309872 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-19 17:16:59.309881 | orchestrator | Friday 19 September 2025 17:15:40 +0000 (0:00:00.080) 0:00:57.931 ****** 2025-09-19 17:16:59.309891 | orchestrator | 2025-09-19 17:16:59.309900 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-09-19 
17:16:59.309914 | orchestrator | Friday 19 September 2025 17:15:40 +0000 (0:00:00.069) 0:00:58.001 ****** 2025-09-19 17:16:59.309924 | orchestrator | 2025-09-19 17:16:59.309934 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-09-19 17:16:59.309943 | orchestrator | Friday 19 September 2025 17:15:40 +0000 (0:00:00.244) 0:00:58.245 ****** 2025-09-19 17:16:59.309952 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:16:59.309962 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:16:59.309989 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:16:59.309999 | orchestrator | 2025-09-19 17:16:59.310009 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-09-19 17:16:59.310043 | orchestrator | Friday 19 September 2025 17:15:42 +0000 (0:00:01.842) 0:01:00.087 ****** 2025-09-19 17:16:59.310055 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:16:59.310065 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:16:59.310075 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-09-19 17:16:59.310084 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-09-19 17:16:59.310094 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2025-09-19 17:16:59.310104 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:16:59.310113 | orchestrator |
2025-09-19 17:16:59.310123 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-09-19 17:16:59.310132 | orchestrator | Friday 19 September 2025 17:16:21 +0000 (0:00:39.289) 0:01:39.377 ******
2025-09-19 17:16:59.310142 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:16:59.310151 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:16:59.310161 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:16:59.310170 | orchestrator |
2025-09-19 17:16:59.310180 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-09-19 17:16:59.310190 | orchestrator | Friday 19 September 2025 17:16:52 +0000 (0:00:30.933) 0:02:10.310 ******
2025-09-19 17:16:59.310199 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:16:59.310209 | orchestrator |
2025-09-19 17:16:59.310224 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-09-19 17:16:59.310234 | orchestrator | Friday 19 September 2025 17:16:55 +0000 (0:00:02.522) 0:02:12.833 ******
2025-09-19 17:16:59.310244 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:16:59.310254 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:16:59.310263 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:16:59.310273 | orchestrator |
2025-09-19 17:16:59.310283 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-09-19 17:16:59.310292 | orchestrator | Friday 19 September 2025 17:16:55 +0000 (0:00:00.502) 0:02:13.335 ******
2025-09-19 17:16:59.310304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-09-19 17:16:59.310322 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-09-19 17:16:59.310334 | orchestrator |
2025-09-19 17:16:59.310343 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-09-19 17:16:59.310353 | orchestrator | Friday 19 September 2025 17:16:58 +0000 (0:00:02.774) 0:02:16.110 ******
2025-09-19 17:16:59.310363 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:16:59.310372 | orchestrator |
2025-09-19 17:16:59.310382 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 17:16:59.310391 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 17:16:59.310401 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 17:16:59.310411 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-19 17:16:59.310421 | orchestrator |
2025-09-19 17:16:59.310431 | orchestrator |
2025-09-19 17:16:59.310440 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 17:16:59.310450 | orchestrator | Friday 19 September 2025 17:16:58 +0000 (0:00:00.284) 0:02:16.394 ******
2025-09-19 17:16:59.310459 | orchestrator | ===============================================================================
2025-09-19 17:16:59.310468 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 39.29s
2025-09-19 17:16:59.310478 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.25s
2025-09-19 17:16:59.310487 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.93s
2025-09-19 17:16:59.310497 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.77s
2025-09-19 17:16:59.310506 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.52s
2025-09-19 17:16:59.310516 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.38s
2025-09-19 17:16:59.310525 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.32s
2025-09-19 17:16:59.310541 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.84s
2025-09-19 17:16:59.310551 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.30s
2025-09-19 17:16:59.310560 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.28s
2025-09-19 17:16:59.310569 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.26s
2025-09-19 17:16:59.310579 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.25s
2025-09-19 17:16:59.310589 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.23s
2025-09-19 17:16:59.310598 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.10s
2025-09-19 17:16:59.310608 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.08s
2025-09-19 17:16:59.310617 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.85s
2025-09-19 17:16:59.310627 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.75s
2025-09-19 17:16:59.310636 | orchestrator | grafana : Find custom grafana dashboards
-------------------------------- 0.71s
2025-09-19 17:16:59.310646 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.66s
2025-09-19 17:16:59.310655 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.64s
2025-09-19 17:16:59.310665 | orchestrator | 2025-09-19 17:16:59 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED
2025-09-19 17:16:59.310680 | orchestrator | 2025-09-19 17:16:59 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:20:08.025758 | orchestrator | 2025-09-19 17:20:08 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED
2025-09-19 17:20:08.025813 | orchestrator | 2025-09-19 17:20:08 | INFO  | Wait 1 second(s) until the next check
2025-09-19 17:20:11.069834 | orchestrator | 2025-09-19 17:20:11 | INFO
 | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:20:11.069907 | orchestrator | 2025-09-19 17:20:11 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:20:14.107278 | orchestrator | 2025-09-19 17:20:14 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:20:14.107341 | orchestrator | 2025-09-19 17:20:14 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:20:17.156595 | orchestrator | 2025-09-19 17:20:17 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:20:17.156696 | orchestrator | 2025-09-19 17:20:17 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:20:20.206477 | orchestrator | 2025-09-19 17:20:20 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:20:20.206583 | orchestrator | 2025-09-19 17:20:20 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:20:23.256251 | orchestrator | 2025-09-19 17:20:23 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state STARTED 2025-09-19 17:20:23.256389 | orchestrator | 2025-09-19 17:20:23 | INFO  | Wait 1 second(s) until the next check 2025-09-19 17:20:26.314561 | orchestrator | 2025-09-19 17:20:26 | INFO  | Task 6266c3cc-d7d4-4bba-8a26-99fac2e019a4 is in state SUCCESS 2025-09-19 17:20:26.316568 | orchestrator | 2025-09-19 17:20:26.316709 | orchestrator | 2025-09-19 17:20:26.316729 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:20:26.316742 | orchestrator | 2025-09-19 17:20:26.316753 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-09-19 17:20:26.316765 | orchestrator | Friday 19 September 2025 17:12:11 +0000 (0:00:00.281) 0:00:00.281 ****** 2025-09-19 17:20:26.316777 | orchestrator | changed: [testbed-manager] 2025-09-19 17:20:26.316789 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:20:26.316859 | orchestrator | changed: 
[testbed-node-1] 2025-09-19 17:20:26.316874 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:20:26.316885 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:20:26.316896 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:20:26.316907 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:20:26.316948 | orchestrator | 2025-09-19 17:20:26.316961 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:20:26.316972 | orchestrator | Friday 19 September 2025 17:12:12 +0000 (0:00:01.272) 0:00:01.554 ****** 2025-09-19 17:20:26.316983 | orchestrator | changed: [testbed-manager] 2025-09-19 17:20:26.316994 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:20:26.317005 | orchestrator | changed: [testbed-node-1] 2025-09-19 17:20:26.317016 | orchestrator | changed: [testbed-node-2] 2025-09-19 17:20:26.317027 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:20:26.317062 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:20:26.317075 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:20:26.317086 | orchestrator | 2025-09-19 17:20:26.317097 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:20:26.317109 | orchestrator | Friday 19 September 2025 17:12:13 +0000 (0:00:01.011) 0:00:02.566 ****** 2025-09-19 17:20:26.317224 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-09-19 17:20:26.317239 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-09-19 17:20:26.317250 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-09-19 17:20:26.317261 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-09-19 17:20:26.317271 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-09-19 17:20:26.317282 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-09-19 17:20:26.317294 | orchestrator | changed: 
[testbed-node-5] => (item=enable_nova_True) 2025-09-19 17:20:26.317305 | orchestrator | 2025-09-19 17:20:26.317315 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-09-19 17:20:26.317326 | orchestrator | 2025-09-19 17:20:26.317337 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-19 17:20:26.317348 | orchestrator | Friday 19 September 2025 17:12:14 +0000 (0:00:01.163) 0:00:03.730 ****** 2025-09-19 17:20:26.317359 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:20:26.317370 | orchestrator | 2025-09-19 17:20:26.317381 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-09-19 17:20:26.317405 | orchestrator | Friday 19 September 2025 17:12:15 +0000 (0:00:00.601) 0:00:04.332 ****** 2025-09-19 17:20:26.317417 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-09-19 17:20:26.317428 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-09-19 17:20:26.317455 | orchestrator | 2025-09-19 17:20:26.317465 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-09-19 17:20:26.317476 | orchestrator | Friday 19 September 2025 17:12:20 +0000 (0:00:04.751) 0:00:09.083 ****** 2025-09-19 17:20:26.317487 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 17:20:26.317498 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-19 17:20:26.317509 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:20:26.317519 | orchestrator | 2025-09-19 17:20:26.317530 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-19 17:20:26.317541 | orchestrator | Friday 19 September 2025 17:12:24 +0000 (0:00:04.487) 0:00:13.570 ****** 2025-09-19 17:20:26.317552 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:20:26.317563 | orchestrator 
| 2025-09-19 17:20:26.317674 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-09-19 17:20:26.317686 | orchestrator | Friday 19 September 2025 17:12:25 +0000 (0:00:00.746) 0:00:14.317 ****** 2025-09-19 17:20:26.317697 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:20:26.317708 | orchestrator | 2025-09-19 17:20:26.317718 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-09-19 17:20:26.317729 | orchestrator | Friday 19 September 2025 17:12:27 +0000 (0:00:01.834) 0:00:16.152 ****** 2025-09-19 17:20:26.317740 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:20:26.317751 | orchestrator | 2025-09-19 17:20:26.317762 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-19 17:20:26.317773 | orchestrator | Friday 19 September 2025 17:12:30 +0000 (0:00:03.006) 0:00:19.158 ****** 2025-09-19 17:20:26.317784 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.317795 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.317806 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.317817 | orchestrator | 2025-09-19 17:20:26.317828 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-19 17:20:26.317839 | orchestrator | Friday 19 September 2025 17:12:30 +0000 (0:00:00.649) 0:00:19.807 ****** 2025-09-19 17:20:26.317850 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:20:26.317862 | orchestrator | 2025-09-19 17:20:26.317873 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-09-19 17:20:26.317884 | orchestrator | Friday 19 September 2025 17:13:00 +0000 (0:00:30.017) 0:00:49.825 ****** 2025-09-19 17:20:26.317894 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:20:26.317915 | orchestrator | 2025-09-19 17:20:26.317927 | orchestrator | TASK [nova-cell : Get a list of 
existing cells] ******************************** 2025-09-19 17:20:26.317938 | orchestrator | Friday 19 September 2025 17:13:16 +0000 (0:00:15.696) 0:01:05.521 ****** 2025-09-19 17:20:26.317948 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:20:26.317959 | orchestrator | 2025-09-19 17:20:26.317970 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-19 17:20:26.317992 | orchestrator | Friday 19 September 2025 17:13:29 +0000 (0:00:12.826) 0:01:18.347 ****** 2025-09-19 17:20:26.318248 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:20:26.318269 | orchestrator | 2025-09-19 17:20:26.318281 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-09-19 17:20:26.318291 | orchestrator | Friday 19 September 2025 17:13:30 +0000 (0:00:01.061) 0:01:19.409 ****** 2025-09-19 17:20:26.318302 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.318313 | orchestrator | 2025-09-19 17:20:26.318324 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-19 17:20:26.318335 | orchestrator | Friday 19 September 2025 17:13:30 +0000 (0:00:00.477) 0:01:19.886 ****** 2025-09-19 17:20:26.318346 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:20:26.318357 | orchestrator | 2025-09-19 17:20:26.318368 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-09-19 17:20:26.318379 | orchestrator | Friday 19 September 2025 17:13:31 +0000 (0:00:00.506) 0:01:20.393 ****** 2025-09-19 17:20:26.318389 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:20:26.318400 | orchestrator | 2025-09-19 17:20:26.318411 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-19 17:20:26.318422 | orchestrator | Friday 19 September 2025 17:13:49 +0000 (0:00:18.026) 
0:01:38.419 ****** 2025-09-19 17:20:26.318432 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.318443 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.318454 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.318465 | orchestrator | 2025-09-19 17:20:26.318475 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-09-19 17:20:26.318486 | orchestrator | 2025-09-19 17:20:26.318497 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-09-19 17:20:26.318508 | orchestrator | Friday 19 September 2025 17:13:49 +0000 (0:00:00.271) 0:01:38.691 ****** 2025-09-19 17:20:26.318518 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:20:26.318529 | orchestrator | 2025-09-19 17:20:26.318540 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-09-19 17:20:26.318551 | orchestrator | Friday 19 September 2025 17:13:50 +0000 (0:00:00.459) 0:01:39.150 ****** 2025-09-19 17:20:26.318562 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.318573 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.318584 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:20:26.318595 | orchestrator | 2025-09-19 17:20:26.318606 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-09-19 17:20:26.318616 | orchestrator | Friday 19 September 2025 17:13:52 +0000 (0:00:02.221) 0:01:41.372 ****** 2025-09-19 17:20:26.318627 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.318638 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.318648 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:20:26.318659 | orchestrator | 2025-09-19 17:20:26.318670 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-19 17:20:26.318689 | 
orchestrator | Friday 19 September 2025 17:13:54 +0000 (0:00:02.341) 0:01:43.713 ****** 2025-09-19 17:20:26.318700 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.318711 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.318722 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.318732 | orchestrator | 2025-09-19 17:20:26.318743 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-19 17:20:26.318763 | orchestrator | Friday 19 September 2025 17:13:55 +0000 (0:00:00.311) 0:01:44.024 ****** 2025-09-19 17:20:26.318774 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-19 17:20:26.318785 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.318796 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-19 17:20:26.318807 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.318818 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-19 17:20:26.318829 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-09-19 17:20:26.318840 | orchestrator | 2025-09-19 17:20:26.318851 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-09-19 17:20:26.318861 | orchestrator | Friday 19 September 2025 17:14:04 +0000 (0:00:09.838) 0:01:53.863 ****** 2025-09-19 17:20:26.318894 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.318905 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.318916 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.318926 | orchestrator | 2025-09-19 17:20:26.318938 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-09-19 17:20:26.318948 | orchestrator | Friday 19 September 2025 17:14:05 +0000 (0:00:00.347) 0:01:54.211 ****** 2025-09-19 17:20:26.318959 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-19 17:20:26.318970 | 
orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.318981 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-19 17:20:26.318991 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.319002 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-19 17:20:26.319013 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.319024 | orchestrator | 2025-09-19 17:20:26.319034 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-19 17:20:26.319099 | orchestrator | Friday 19 September 2025 17:14:05 +0000 (0:00:00.631) 0:01:54.842 ****** 2025-09-19 17:20:26.319111 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.319122 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.319132 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:20:26.319143 | orchestrator | 2025-09-19 17:20:26.319154 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-09-19 17:20:26.319165 | orchestrator | Friday 19 September 2025 17:14:06 +0000 (0:00:00.511) 0:01:55.353 ****** 2025-09-19 17:20:26.319176 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.319186 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.319197 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:20:26.319208 | orchestrator | 2025-09-19 17:20:26.319219 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-09-19 17:20:26.319230 | orchestrator | Friday 19 September 2025 17:14:07 +0000 (0:00:00.968) 0:01:56.322 ****** 2025-09-19 17:20:26.319241 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.319252 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.319282 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:20:26.319294 | orchestrator | 2025-09-19 17:20:26.319305 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] 
*********************** 2025-09-19 17:20:26.319316 | orchestrator | Friday 19 September 2025 17:14:09 +0000 (0:00:02.054) 0:01:58.376 ****** 2025-09-19 17:20:26.319327 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.319338 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.319348 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:20:26.319359 | orchestrator | 2025-09-19 17:20:26.319370 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-19 17:20:26.319381 | orchestrator | Friday 19 September 2025 17:14:29 +0000 (0:00:20.344) 0:02:18.720 ****** 2025-09-19 17:20:26.319392 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.319403 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.319413 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:20:26.319424 | orchestrator | 2025-09-19 17:20:26.319435 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-19 17:20:26.319455 | orchestrator | Friday 19 September 2025 17:14:42 +0000 (0:00:13.181) 0:02:31.902 ****** 2025-09-19 17:20:26.319465 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:20:26.319474 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.319484 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.319493 | orchestrator | 2025-09-19 17:20:26.319503 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-19 17:20:26.319513 | orchestrator | Friday 19 September 2025 17:14:44 +0000 (0:00:01.275) 0:02:33.177 ****** 2025-09-19 17:20:26.319522 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.319532 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.319542 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:20:26.319551 | orchestrator | 2025-09-19 17:20:26.319561 | orchestrator | TASK [nova-cell : Update cell] 
************************************************* 2025-09-19 17:20:26.319571 | orchestrator | Friday 19 September 2025 17:14:56 +0000 (0:00:12.272) 0:02:45.450 ****** 2025-09-19 17:20:26.319581 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.319590 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.319600 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.319609 | orchestrator | 2025-09-19 17:20:26.319619 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-09-19 17:20:26.319648 | orchestrator | Friday 19 September 2025 17:14:57 +0000 (0:00:01.015) 0:02:46.465 ****** 2025-09-19 17:20:26.319658 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.319667 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.319677 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.319686 | orchestrator | 2025-09-19 17:20:26.319696 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-19 17:20:26.319705 | orchestrator | 2025-09-19 17:20:26.319715 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-19 17:20:26.319724 | orchestrator | Friday 19 September 2025 17:14:57 +0000 (0:00:00.498) 0:02:46.964 ****** 2025-09-19 17:20:26.319740 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:20:26.319750 | orchestrator | 2025-09-19 17:20:26.319760 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-19 17:20:26.319770 | orchestrator | Friday 19 September 2025 17:14:58 +0000 (0:00:00.607) 0:02:47.571 ****** 2025-09-19 17:20:26.319779 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-19 17:20:26.319789 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-19 17:20:26.319798 | 
orchestrator | 2025-09-19 17:20:26.319808 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-19 17:20:26.319817 | orchestrator | Friday 19 September 2025 17:15:01 +0000 (0:00:03.387) 0:02:50.959 ****** 2025-09-19 17:20:26.319827 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-19 17:20:26.319838 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-19 17:20:26.319848 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-09-19 17:20:26.319857 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-19 17:20:26.319867 | orchestrator | 2025-09-19 17:20:26.319876 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-19 17:20:26.319886 | orchestrator | Friday 19 September 2025 17:15:08 +0000 (0:00:06.882) 0:02:57.841 ****** 2025-09-19 17:20:26.319895 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-19 17:20:26.319905 | orchestrator | 2025-09-19 17:20:26.319914 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-19 17:20:26.319924 | orchestrator | Friday 19 September 2025 17:15:12 +0000 (0:00:03.344) 0:03:01.186 ****** 2025-09-19 17:20:26.319940 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-19 17:20:26.319949 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-19 17:20:26.319959 | orchestrator | 2025-09-19 17:20:26.319968 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-19 17:20:26.319978 | orchestrator | Friday 19 September 2025 17:15:16 +0000 (0:00:04.041) 0:03:05.228 
****** 2025-09-19 17:20:26.319987 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-19 17:20:26.319997 | orchestrator | 2025-09-19 17:20:26.320006 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-19 17:20:26.320016 | orchestrator | Friday 19 September 2025 17:15:19 +0000 (0:00:03.573) 0:03:08.801 ****** 2025-09-19 17:20:26.320025 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-09-19 17:20:26.320035 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-19 17:20:26.320064 | orchestrator | 2025-09-19 17:20:26.320074 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-19 17:20:26.320090 | orchestrator | Friday 19 September 2025 17:15:27 +0000 (0:00:07.932) 0:03:16.734 ****** 2025-09-19 17:20:26.320105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 17:20:26.320125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 17:20:26.320138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.320164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.320176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.320187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.320197 | orchestrator |
2025-09-19 17:20:26.320207 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-09-19 17:20:26.320217 | orchestrator | Friday 19 September 2025 17:15:29 +0000 (0:00:01.276) 0:03:18.011 ******
2025-09-19 17:20:26.320226 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.320236 | orchestrator |
2025-09-19 17:20:26.320246 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-09-19 17:20:26.320255 | orchestrator | Friday 19 September 2025 17:15:29 +0000 (0:00:00.134) 0:03:18.145 ******
2025-09-19 17:20:26.320265 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.320274 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.320284 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.320293 | orchestrator |
2025-09-19 17:20:26.320303 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-09-19 17:20:26.320317 | orchestrator | Friday 19 September 2025 17:15:29 +0000 (0:00:00.289) 0:03:18.435 ******
2025-09-19 17:20:26.320327 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-19 17:20:26.320337 | orchestrator |
2025-09-19 17:20:26.320347 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-09-19 17:20:26.320356 | orchestrator | Friday 19 September 2025 17:15:30 +0000 (0:00:00.891) 0:03:19.327 ******
2025-09-19 17:20:26.320371 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.320381 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.320391 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.320400 | orchestrator |
2025-09-19 17:20:26.320410 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-09-19 17:20:26.320419 | orchestrator | Friday 19 September 2025 17:15:30 +0000 (0:00:00.315) 0:03:19.642 ******
2025-09-19 17:20:26.320429 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:20:26.320439 | orchestrator |
2025-09-19 17:20:26.320449 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-09-19 17:20:26.320458 | orchestrator | Friday 19 September 2025 17:15:31 +0000 (0:00:00.534) 0:03:20.177 ******
2025-09-19 17:20:26.320469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.320488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.320504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.320523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.320534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.320550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.320561 | orchestrator |
2025-09-19 17:20:26.320571 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-09-19 17:20:26.320581 | orchestrator | Friday 19 September 2025 17:15:33 +0000 (0:00:02.390) 0:03:22.567 ******
2025-09-19 17:20:26.320592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.320613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.320630 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.320641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.320652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.320662 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.320679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.320691 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.320708 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.320718 | orchestrator |
2025-09-19 17:20:26.320727 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-09-19 17:20:26.320737 | orchestrator | Friday 19 September 2025 17:15:34 +0000 (0:00:00.855) 0:03:23.423 ******
2025-09-19 17:20:26.320752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.320764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.320774 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.320792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.320804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.320820 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.320831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.320841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.320851 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.320861 | orchestrator |
2025-09-19 17:20:26.320871 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2025-09-19 17:20:26.320880 | orchestrator | Friday 19 September 2025 17:15:35 +0000 (0:00:00.865) 0:03:24.288 ******
2025-09-19 17:20:26.320989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.321012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.321034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.321064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.321085 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.321095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.321106 | orchestrator |
2025-09-19 17:20:26.321116 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2025-09-19 17:20:26.321133 | orchestrator | Friday 19 September 2025 17:15:37 +0000 (0:00:02.437) 0:03:26.726 ******
2025-09-19 17:20:26.321148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.321160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.321178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.321188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.321205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.321220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.321230 | orchestrator |
2025-09-19 17:20:26.321241 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2025-09-19 17:20:26.321250 | orchestrator | Friday 19 September 2025 17:15:43 +0000 (0:00:05.535) 0:03:32.261 ******
2025-09-19 17:20:26.321261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.321277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.321289 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.321299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.321317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.321328 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.321342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-09-19 17:20:26.321353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.321364 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.321373 | orchestrator |
2025-09-19 17:20:26.321383 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2025-09-19 17:20:26.321393 | orchestrator | Friday 19 September 2025 17:15:43 +0000 (0:00:00.601) 0:03:32.862 ******
2025-09-19 17:20:26.321403 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:20:26.321413 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:20:26.321423 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:20:26.321433 | orchestrator |
2025-09-19 17:20:26.321448 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2025-09-19 17:20:26.321464 | orchestrator | Friday 19 September 2025 17:15:45 +0000 (0:00:01.708) 0:03:34.571 ******
2025-09-19 17:20:26.321475 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.321485 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.321495 | orchestrator | skipping:
[testbed-node-2] 2025-09-19 17:20:26.321505 | orchestrator | 2025-09-19 17:20:26.321515 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-19 17:20:26.321525 | orchestrator | Friday 19 September 2025 17:15:45 +0000 (0:00:00.315) 0:03:34.887 ****** 2025-09-19 17:20:26.321536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 17:20:26.321552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 17:20:26.321570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-19 17:20:26.321587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.321597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.321608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.321618 | orchestrator | 2025-09-19 
17:20:26.321628 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-19 17:20:26.321638 | orchestrator | Friday 19 September 2025 17:15:48 +0000 (0:00:00.134) 0:03:37.210 ******
2025-09-19 17:20:26.321648 | orchestrator |
2025-09-19 17:20:26.321663 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-19 17:20:26.321674 | orchestrator | Friday 19 September 2025 17:15:48 +0000 (0:00:00.142) 0:03:37.345 ******
2025-09-19 17:20:26.321684 | orchestrator |
2025-09-19 17:20:26.321694 | orchestrator | TASK [nova : Flush handlers] ***************************************************
2025-09-19 17:20:26.321703 | orchestrator | Friday 19 September 2025 17:15:48 +0000 (0:00:00.133) 0:03:37.487 ******
2025-09-19 17:20:26.321713 | orchestrator |
2025-09-19 17:20:26.321723 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] **********************
2025-09-19 17:20:26.321733 | orchestrator | Friday 19 September 2025 17:15:48 +0000 (0:00:00.133) 0:03:37.621 ******
2025-09-19 17:20:26.321743 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:20:26.321752 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:20:26.321762 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:20:26.321772 | orchestrator |
2025-09-19 17:20:26.321782 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] ****************************
2025-09-19 17:20:26.321792 | orchestrator | Friday 19 September 2025 17:16:07 +0000 (0:00:18.467) 0:03:56.088 ******
2025-09-19 17:20:26.321802 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:20:26.321811 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:20:26.321821 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:20:26.321831 | orchestrator |
2025-09-19 17:20:26.321841 | orchestrator | PLAY [Apply role nova-cell] ****************************************************
2025-09-19 17:20:26.321851 |
orchestrator |
2025-09-19 17:20:26.321861 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-19 17:20:26.321871 | orchestrator | Friday 19 September 2025 17:16:12 +0000 (0:00:05.694) 0:04:01.783 ******
2025-09-19 17:20:26.321887 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-19 17:20:26.321898 | orchestrator |
2025-09-19 17:20:26.321907 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-19 17:20:26.321917 | orchestrator | Friday 19 September 2025 17:16:13 +0000 (0:00:01.165) 0:04:02.948 ******
2025-09-19 17:20:26.321927 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:20:26.321937 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:20:26.321947 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:20:26.321957 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.321967 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.321977 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.321987 | orchestrator |
2025-09-19 17:20:26.321997 | orchestrator | TASK [Load and persist br_netfilter module] ************************************
2025-09-19 17:20:26.322007 | orchestrator | Friday 19 September 2025 17:16:14 +0000 (0:00:00.618) 0:04:03.567 ******
2025-09-19 17:20:26.322075 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.322088 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.322098 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.322108 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-19 17:20:26.322119 | orchestrator |
2025-09-19 17:20:26.322129 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-19 17:20:26.322147 | orchestrator | Friday 19 September 2025 17:16:15 +0000 (0:00:01.011) 0:04:04.579 ******
2025-09-19 17:20:26.322156 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter)
2025-09-19 17:20:26.322167 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter)
2025-09-19 17:20:26.322177 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter)
2025-09-19 17:20:26.322187 | orchestrator |
2025-09-19 17:20:26.322197 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-19 17:20:26.322207 | orchestrator | Friday 19 September 2025 17:16:16 +0000 (0:00:00.680) 0:04:05.259 ******
2025-09-19 17:20:26.322217 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter)
2025-09-19 17:20:26.322227 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter)
2025-09-19 17:20:26.322237 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter)
2025-09-19 17:20:26.322246 | orchestrator |
2025-09-19 17:20:26.322257 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-19 17:20:26.322267 | orchestrator | Friday 19 September 2025 17:16:17 +0000 (0:00:01.269) 0:04:06.529 ******
2025-09-19 17:20:26.322277 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)
2025-09-19 17:20:26.322287 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:20:26.322297 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)
2025-09-19 17:20:26.322307 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:20:26.322316 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)
2025-09-19 17:20:26.322326 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:20:26.322336 | orchestrator |
2025-09-19 17:20:26.322346 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] **********************
2025-09-19 17:20:26.322356 | orchestrator | Friday 19 September 2025 17:16:18 +0000 (0:00:00.712) 0:04:07.241 ******
2025-09-19 17:20:26.322365 |
orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 17:20:26.322375 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 17:20:26.322385 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.322395 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 17:20:26.322405 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 17:20:26.322415 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.322424 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 17:20:26.322445 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 17:20:26.322455 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.322465 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 17:20:26.322475 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 17:20:26.322489 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-09-19 17:20:26.322500 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 17:20:26.322510 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 17:20:26.322519 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-09-19 17:20:26.322529 | orchestrator |
2025-09-19 17:20:26.322539 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ********************************
2025-09-19 17:20:26.322549 | orchestrator | Friday 19 September 2025 17:16:19 +0000 (0:00:01.076) 0:04:08.318 ******
2025-09-19 17:20:26.322559 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.322569 | orchestrator | skipping:
[testbed-node-1] 2025-09-19 17:20:26.322578 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.322588 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:20:26.322598 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:20:26.322608 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:20:26.322618 | orchestrator | 2025-09-19 17:20:26.322628 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-19 17:20:26.322637 | orchestrator | Friday 19 September 2025 17:16:20 +0000 (0:00:01.391) 0:04:09.709 ****** 2025-09-19 17:20:26.322647 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.322657 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.322667 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.322677 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:20:26.322687 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:20:26.322696 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:20:26.322706 | orchestrator | 2025-09-19 17:20:26.322716 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-19 17:20:26.322726 | orchestrator | Friday 19 September 2025 17:16:22 +0000 (0:00:01.578) 0:04:11.287 ****** 2025-09-19 17:20:26.322738 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': 
{'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322784 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}}) 2025-09-19 17:20:26.322799 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322810 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322821 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322847 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322865 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322875 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322921 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322937 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.322973 | orchestrator | 2025-09-19 17:20:26.322983 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 17:20:26.322993 | orchestrator | Friday 19 September 2025 17:16:25 +0000 (0:00:03.005) 0:04:14.292 ****** 2025-09-19 17:20:26.323003 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:20:26.323015 | orchestrator | 2025-09-19 17:20:26.323024 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-19 17:20:26.323034 | orchestrator | Friday 19 September 2025 17:16:26 +0000 (0:00:01.428) 0:04:15.721 ****** 2025-09-19 17:20:26.323106 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323118 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323135 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323188 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323199 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323210 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323242 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323267 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2025-09-19 17:20:26.323276 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323284 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.323297 | orchestrator | 2025-09-19 17:20:26.323305 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-19 17:20:26.323313 | orchestrator | Friday 19 September 2025 17:16:30 +0000 (0:00:03.524) 0:04:19.245 ****** 2025-09-19 17:20:26.323327 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 17:20:26.323337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 17:20:26.323349 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 17:20:26.323358 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:20:26.323366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 17:20:26.323374 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 17:20:26.323392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 17:20:26.323401 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:20:26.323409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 17:20:26.323418 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 17:20:26.323432 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 17:20:26.323441 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:20:26.323449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 17:20:26.323458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:20:26.323471 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.323485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 17:20:26.323494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:20:26.323502 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.323511 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 17:20:26.323522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:20:26.323531 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.323539 | orchestrator | 2025-09-19 17:20:26.323547 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-19 17:20:26.323555 | orchestrator | Friday 19 September 2025 17:16:31 +0000 (0:00:01.592) 0:04:20.838 ****** 2025-09-19 17:20:26.323563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 17:20:26.323579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 17:20:26.323594 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 17:20:26.323603 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:20:26.323611 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 17:20:26.323620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 17:20:26.323632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 17:20:26.323641 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:20:26.323654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 17:20:26.323667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 17:20:26.323675 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-19 17:20:26.323684 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:20:26.323692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 17:20:26.323704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:20:26.323712 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.323721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 17:20:26.323734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:20:26.323743 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.323751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-19 17:20:26.323765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-19 17:20:26.323774 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.323782 | orchestrator | 2025-09-19 17:20:26.323790 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-19 17:20:26.323798 | orchestrator | Friday 19 September 2025 17:16:33 +0000 (0:00:02.077) 0:04:22.915 ****** 2025-09-19 17:20:26.323806 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.323814 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.323822 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.323830 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-19 17:20:26.323838 | orchestrator | 2025-09-19 17:20:26.323846 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-09-19 17:20:26.323854 | orchestrator | Friday 19 September 2025 17:16:34 +0000 (0:00:00.978) 0:04:23.894 ****** 2025-09-19 17:20:26.323862 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 17:20:26.323869 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 17:20:26.323877 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 17:20:26.323885 | 
orchestrator | 2025-09-19 17:20:26.323893 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-09-19 17:20:26.323901 | orchestrator | Friday 19 September 2025 17:16:35 +0000 (0:00:00.925) 0:04:24.819 ****** 2025-09-19 17:20:26.323909 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 17:20:26.323917 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-19 17:20:26.323924 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-19 17:20:26.323932 | orchestrator | 2025-09-19 17:20:26.323940 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-09-19 17:20:26.323948 | orchestrator | Friday 19 September 2025 17:16:36 +0000 (0:00:00.932) 0:04:25.751 ****** 2025-09-19 17:20:26.323956 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:20:26.323968 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:20:26.323976 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:20:26.323985 | orchestrator | 2025-09-19 17:20:26.323993 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-09-19 17:20:26.324001 | orchestrator | Friday 19 September 2025 17:16:37 +0000 (0:00:00.496) 0:04:26.248 ****** 2025-09-19 17:20:26.324008 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:20:26.324020 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:20:26.324028 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:20:26.324036 | orchestrator | 2025-09-19 17:20:26.324061 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-09-19 17:20:26.324069 | orchestrator | Friday 19 September 2025 17:16:38 +0000 (0:00:00.959) 0:04:27.208 ****** 2025-09-19 17:20:26.324077 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-19 17:20:26.324085 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-19 17:20:26.324093 | orchestrator | changed: [testbed-node-5] => 
(item=nova-compute) 2025-09-19 17:20:26.324101 | orchestrator | 2025-09-19 17:20:26.324109 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-09-19 17:20:26.324117 | orchestrator | Friday 19 September 2025 17:16:39 +0000 (0:00:01.198) 0:04:28.407 ****** 2025-09-19 17:20:26.324125 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-19 17:20:26.324133 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-19 17:20:26.324140 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-19 17:20:26.324148 | orchestrator | 2025-09-19 17:20:26.324156 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-09-19 17:20:26.324164 | orchestrator | Friday 19 September 2025 17:16:40 +0000 (0:00:01.191) 0:04:29.599 ****** 2025-09-19 17:20:26.324171 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-09-19 17:20:26.324179 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-09-19 17:20:26.324187 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-09-19 17:20:26.324195 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-09-19 17:20:26.324203 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-09-19 17:20:26.324210 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-09-19 17:20:26.324218 | orchestrator | 2025-09-19 17:20:26.324226 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-09-19 17:20:26.324234 | orchestrator | Friday 19 September 2025 17:16:44 +0000 (0:00:03.838) 0:04:33.437 ****** 2025-09-19 17:20:26.324242 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:20:26.324249 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:20:26.324257 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:20:26.324265 | orchestrator | 2025-09-19 17:20:26.324272 
| orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-09-19 17:20:26.324280 | orchestrator | Friday 19 September 2025 17:16:44 +0000 (0:00:00.458) 0:04:33.896 ****** 2025-09-19 17:20:26.324288 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:20:26.324296 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:20:26.324303 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:20:26.324311 | orchestrator | 2025-09-19 17:20:26.324319 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-09-19 17:20:26.324327 | orchestrator | Friday 19 September 2025 17:16:45 +0000 (0:00:00.330) 0:04:34.227 ****** 2025-09-19 17:20:26.324334 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:20:26.324343 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:20:26.324350 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:20:26.324358 | orchestrator | 2025-09-19 17:20:26.324371 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-09-19 17:20:26.324380 | orchestrator | Friday 19 September 2025 17:16:46 +0000 (0:00:01.219) 0:04:35.447 ****** 2025-09-19 17:20:26.324388 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-19 17:20:26.324402 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-19 17:20:26.324410 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-09-19 17:20:26.324418 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-19 17:20:26.324426 | orchestrator | changed: [testbed-node-4] => 
(item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-19 17:20:26.324434 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-09-19 17:20:26.324442 | orchestrator | 2025-09-19 17:20:26.324450 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-09-19 17:20:26.324458 | orchestrator | Friday 19 September 2025 17:16:49 +0000 (0:00:03.327) 0:04:38.775 ****** 2025-09-19 17:20:26.324465 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 17:20:26.324473 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 17:20:26.324481 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 17:20:26.324488 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-19 17:20:26.324496 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:20:26.324504 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-19 17:20:26.324511 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:20:26.324519 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-19 17:20:26.324527 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:20:26.324535 | orchestrator | 2025-09-19 17:20:26.324543 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-09-19 17:20:26.324551 | orchestrator | Friday 19 September 2025 17:16:53 +0000 (0:00:03.567) 0:04:42.342 ****** 2025-09-19 17:20:26.324559 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:20:26.324566 | orchestrator | 2025-09-19 17:20:26.324574 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-09-19 17:20:26.324582 | orchestrator | Friday 19 September 2025 17:16:53 +0000 (0:00:00.135) 0:04:42.477 ****** 2025-09-19 17:20:26.324594 | orchestrator | skipping: 
[testbed-node-3] 2025-09-19 17:20:26.324602 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:20:26.324610 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:20:26.324617 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.324625 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.324633 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.324641 | orchestrator | 2025-09-19 17:20:26.324648 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-09-19 17:20:26.324656 | orchestrator | Friday 19 September 2025 17:16:54 +0000 (0:00:00.560) 0:04:43.037 ****** 2025-09-19 17:20:26.324664 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-19 17:20:26.324672 | orchestrator | 2025-09-19 17:20:26.324680 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-09-19 17:20:26.324687 | orchestrator | Friday 19 September 2025 17:16:54 +0000 (0:00:00.671) 0:04:43.709 ****** 2025-09-19 17:20:26.324695 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:20:26.324703 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:20:26.324711 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:20:26.324718 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.324726 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.324733 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.324741 | orchestrator | 2025-09-19 17:20:26.324749 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-09-19 17:20:26.324757 | orchestrator | Friday 19 September 2025 17:16:55 +0000 (0:00:00.789) 0:04:44.498 ****** 2025-09-19 17:20:26.324765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324787 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324796 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324860 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324891 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324899 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324925 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.324933 | orchestrator | 2025-09-19 17:20:26.324942 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-19 17:20:26.324950 | orchestrator | Friday 19 September 2025 17:16:59 +0000 (0:00:03.763) 0:04:48.261 ****** 2025-09-19 17:20:26.324958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 17:20:26.324970 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 17:20:26.324984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 17:20:26.324993 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 17:20:26.325006 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-19 17:20:26.325015 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-19 17:20:26.325027 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.325036 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.325064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 17:20:26.325147 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.325159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 17:20:26.325167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-19 17:20:26.325180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.325188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.325206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-19 17:20:26.325219 | orchestrator | 2025-09-19 17:20:26.325232 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-19 17:20:26.325240 | orchestrator | Friday 19 September 2025 17:17:05 +0000 (0:00:06.461) 0:04:54.722 
****** 2025-09-19 17:20:26.325248 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:20:26.325256 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:20:26.325264 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:20:26.325272 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.325282 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.325295 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.325303 | orchestrator | 2025-09-19 17:20:26.325311 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-19 17:20:26.325319 | orchestrator | Friday 19 September 2025 17:17:07 +0000 (0:00:01.306) 0:04:56.029 ****** 2025-09-19 17:20:26.325327 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-19 17:20:26.325335 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-19 17:20:26.325343 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-19 17:20:26.325351 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-19 17:20:26.325364 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-19 17:20:26.325372 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-19 17:20:26.325380 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.325388 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-19 17:20:26.325396 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-19 17:20:26.325404 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.325411 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 
'dest': 'libvirtd.conf'})
2025-09-19 17:20:26.325419 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.325427 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 17:20:26.325435 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 17:20:26.325443 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-09-19 17:20:26.325451 | orchestrator |
2025-09-19 17:20:26.325459 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-09-19 17:20:26.325466 | orchestrator | Friday 19 September 2025 17:17:10 +0000 (0:00:03.619) 0:04:59.649 ******
2025-09-19 17:20:26.325474 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:20:26.325482 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:20:26.325496 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:20:26.325504 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.325512 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.325520 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.325527 | orchestrator |
2025-09-19 17:20:26.325535 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-09-19 17:20:26.325543 | orchestrator | Friday 19 September 2025 17:17:11 +0000 (0:00:00.630) 0:05:00.279 ******
2025-09-19 17:20:26.325551 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 17:20:26.325559 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 17:20:26.325567 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 17:20:26.325574 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 17:20:26.325586 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 17:20:26.325595 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-09-19 17:20:26.325603 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 17:20:26.325610 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 17:20:26.325618 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 17:20:26.325626 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 17:20:26.325634 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 17:20:26.325642 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.325650 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.325657 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 17:20:26.325665 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.325673 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 17:20:26.325681 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 17:20:26.325689 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-09-19 17:20:26.325696 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 17:20:26.325704 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 17:20:26.325712 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-09-19 17:20:26.325720 | orchestrator |
2025-09-19 17:20:26.325728 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-09-19 17:20:26.325736 | orchestrator | Friday 19 September 2025 17:17:16 +0000 (0:00:05.639) 0:05:05.919 ******
2025-09-19 17:20:26.325743 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 17:20:26.325751 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 17:20:26.325763 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 17:20:26.325771 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 17:20:26.325785 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 17:20:26.325793 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 17:20:26.325800 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 17:20:26.325809 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 17:20:26.325816 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-19 17:20:26.325824 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 17:20:26.325832 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 17:20:26.325840 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 17:20:26.325848 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 17:20:26.325856 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.325863 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 17:20:26.325872 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.325880 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 17:20:26.325887 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.325895 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 17:20:26.325903 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 17:20:26.325911 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-19 17:20:26.325918 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 17:20:26.325927 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 17:20:26.325934 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-19 17:20:26.325942 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 17:20:26.325954 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 17:20:26.325962 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-19 17:20:26.325970 | orchestrator |
2025-09-19 17:20:26.325978 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-09-19 17:20:26.325986 | orchestrator | Friday 19 September 2025 17:17:23 +0000 (0:00:07.032) 0:05:12.951 ******
2025-09-19 17:20:26.325993 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:20:26.326001 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:20:26.326009 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:20:26.326088 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.326098 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.326105 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.326113 | orchestrator |
2025-09-19 17:20:26.326121 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-09-19 17:20:26.326129 | orchestrator | Friday 19 September 2025 17:17:24 +0000 (0:00:00.786) 0:05:13.738 ******
2025-09-19 17:20:26.326137 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:20:26.326145 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:20:26.326152 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:20:26.326160 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.326168 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.326176 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.326183 | orchestrator |
2025-09-19 17:20:26.326191 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-09-19 17:20:26.326205 | orchestrator | Friday 19 September 2025 17:17:25 +0000 (0:00:00.645) 0:05:14.383 ******
2025-09-19 17:20:26.326213 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.326220 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.326228 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:20:26.326236 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.326244 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:20:26.326251 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:20:26.326259 | orchestrator |
2025-09-19 17:20:26.326267 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-09-19 17:20:26.326275 | orchestrator | Friday 19 September 2025 17:17:27 +0000 (0:00:01.947) 0:05:16.331 ******
2025-09-19 17:20:26.326289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 17:20:26.326298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 17:20:26.326306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.326319 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 17:20:26.326328 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:20:26.326341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 17:20:26.326350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.326358 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:20:26.326370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 17:20:26.326379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 17:20:26.326395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.326404 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:20:26.326412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 17:20:26.326425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.326433 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.326442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 17:20:26.326455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.326464 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.326472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 17:20:26.326480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.326488 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.326496 | orchestrator |
2025-09-19 17:20:26.326504 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-09-19 17:20:26.326512 | orchestrator | Friday 19 September 2025 17:17:28 +0000 (0:00:01.352) 0:05:17.683 ******
2025-09-19 17:20:26.326520 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-19 17:20:26.326528 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-19 17:20:26.326546 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:20:26.326554 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-19 17:20:26.326562 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-19 17:20:26.326570 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:20:26.326577 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-19 17:20:26.326585 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-19 17:20:26.326593 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:20:26.326601 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-19 17:20:26.326609 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-19 17:20:26.326616 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.326624 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-19 17:20:26.326631 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-19 17:20:26.326638 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.326644 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-19 17:20:26.326651 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-19 17:20:26.326658 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.326664 | orchestrator |
2025-09-19 17:20:26.326671 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-09-19 17:20:26.326678 | orchestrator | Friday 19 September 2025 17:17:29 +0000 (0:00:00.809) 0:05:18.493 ******
2025-09-19 17:20:26.326684 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 17:20:26.326696 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 17:20:26.326703 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-19 17:20:26.326717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 17:20:26.326725 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 17:20:26.326732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 17:20:26.326739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-19 17:20:26.326750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 17:20:26.326757 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-19 17:20:26.326764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.326778 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.326786 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.326793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.326803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.326810 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-19 17:20:26.326821 | orchestrator |
2025-09-19 17:20:26.326828 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-19 17:20:26.326835 | orchestrator | Friday 19 September 2025 17:17:32 +0000 (0:00:02.768) 0:05:21.261 ******
2025-09-19 17:20:26.326842 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:20:26.326848 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:20:26.326855 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:20:26.326862 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.326868 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.326875 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.326882 | orchestrator |
2025-09-19 17:20:26.326888 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 17:20:26.326895 | orchestrator | Friday 19 September 2025 17:17:33 +0000 (0:00:00.141) 0:05:22.037 ******
2025-09-19 17:20:26.326901 | orchestrator |
2025-09-19 17:20:26.326908 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 17:20:26.326914 | orchestrator | Friday 19 September 2025 17:17:33 +0000 (0:00:00.129) 0:05:22.178 ******
2025-09-19 17:20:26.326921 | orchestrator |
2025-09-19 17:20:26.326927 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 17:20:26.326934 | orchestrator | Friday 19 September 2025 17:17:33 +0000 (0:00:00.130) 0:05:22.307 ******
2025-09-19 17:20:26.326941 | orchestrator |
2025-09-19 17:20:26.326948 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 17:20:26.326958 | orchestrator | Friday 19 September 2025 17:17:33 +0000 (0:00:00.126) 0:05:22.437 ******
2025-09-19 17:20:26.326965 | orchestrator |
2025-09-19 17:20:26.326971 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 17:20:26.326978 | orchestrator | Friday 19 September 2025 17:17:33 +0000 (0:00:00.126) 0:05:22.564 ******
2025-09-19 17:20:26.326985 | orchestrator |
2025-09-19 17:20:26.326991 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-19 17:20:26.326998 | orchestrator | Friday 19 September 2025 17:17:33 +0000 (0:00:00.126) 0:05:22.690 ******
2025-09-19 17:20:26.327004 | orchestrator |
2025-09-19 17:20:26.327011 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-09-19 17:20:26.327018 | orchestrator | Friday 19 September 2025 17:17:33 +0000 (0:00:00.276) 0:05:22.967 ******
2025-09-19 17:20:26.327024 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:20:26.327031 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:20:26.327052 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:20:26.327059 | orchestrator |
2025-09-19 17:20:26.327066 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-09-19 17:20:26.327073 | orchestrator | Friday 19 September 2025 17:17:45 +0000 (0:00:11.553) 0:05:34.521 ******
2025-09-19 17:20:26.327079 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:20:26.327086 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:20:26.327092 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:20:26.327099 | orchestrator |
2025-09-19 17:20:26.327106 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-09-19 17:20:26.327113 | orchestrator | Friday 19 September 2025 17:17:56 +0000 (0:00:11.446) 0:05:45.968 ******
2025-09-19 17:20:26.327119 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:20:26.327126 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:20:26.327133 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:20:26.327139 | orchestrator |
2025-09-19 17:20:26.327146 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-09-19 17:20:26.327152 | orchestrator | Friday 19 September 2025 17:18:16 +0000 (0:00:19.080) 0:06:05.048 ******
2025-09-19 17:20:26.327159 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:20:26.327166 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:20:26.327172 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:20:26.327179 | orchestrator |
2025-09-19 17:20:26.327185 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-09-19 17:20:26.327197 | orchestrator | Friday 19 September 2025 17:18:52 +0000 (0:00:36.261) 0:06:41.310 ******
2025-09-19 17:20:26.327203 | orchestrator | changed: [testbed-node-4]
2025-09-19 17:20:26.327210 | orchestrator | changed: [testbed-node-3]
2025-09-19 17:20:26.327216 | orchestrator | changed: [testbed-node-5]
2025-09-19 17:20:26.327223 | orchestrator |
2025-09-19 17:20:26.327230 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-09-19 17:20:26.327236 | orchestrator | Friday 19 September 2025 17:18:53 +0000 (0:00:01.022) 0:06:42.332 ******
2025-09-19 17:20:26.327243 | orchestrator | changed:
[testbed-node-3] 2025-09-19 17:20:26.327249 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:20:26.327256 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:20:26.327263 | orchestrator | 2025-09-19 17:20:26.327269 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-09-19 17:20:26.327279 | orchestrator | Friday 19 September 2025 17:18:54 +0000 (0:00:00.801) 0:06:43.134 ****** 2025-09-19 17:20:26.327286 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:20:26.327292 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:20:26.327299 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:20:26.327306 | orchestrator | 2025-09-19 17:20:26.327313 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-09-19 17:20:26.327319 | orchestrator | Friday 19 September 2025 17:19:18 +0000 (0:00:24.573) 0:07:07.707 ****** 2025-09-19 17:20:26.327326 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:20:26.327333 | orchestrator | 2025-09-19 17:20:26.327339 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-09-19 17:20:26.327346 | orchestrator | Friday 19 September 2025 17:19:18 +0000 (0:00:00.119) 0:07:07.826 ****** 2025-09-19 17:20:26.327353 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:20:26.327359 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:20:26.327366 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:20:26.327372 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:20:26.327379 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:20:26.327385 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2025-09-19 17:20:26.327392 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 17:20:26.327399 | orchestrator |
2025-09-19 17:20:26.327405 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-09-19 17:20:26.327412 | orchestrator | Friday 19 September 2025 17:19:40 +0000 (0:00:22.100) 0:07:29.926 ******
2025-09-19 17:20:26.327418 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:20:26.327425 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.327431 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:20:26.327438 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.327444 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:20:26.327451 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.327457 | orchestrator |
2025-09-19 17:20:26.327464 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-09-19 17:20:26.327471 | orchestrator | Friday 19 September 2025 17:19:48 +0000 (0:00:07.918) 0:07:37.845 ******
2025-09-19 17:20:26.327477 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:20:26.327484 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.327491 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.327497 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.327504 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:20:26.327510 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-09-19 17:20:26.327517 | orchestrator |
2025-09-19 17:20:26.327523 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-19 17:20:26.327530 | orchestrator | Friday 19 September 2025 17:19:52 +0000 (0:00:03.324) 0:07:41.169 ******
2025-09-19 17:20:26.327540 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 17:20:26.327551 | orchestrator |
2025-09-19 17:20:26.327558 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-19 17:20:26.327565 | orchestrator | Friday 19 September 2025 17:20:04 +0000 (0:00:12.384) 0:07:53.554 ******
2025-09-19 17:20:26.327571 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 17:20:26.327578 | orchestrator |
2025-09-19 17:20:26.327585 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-09-19 17:20:26.327591 | orchestrator | Friday 19 September 2025 17:20:05 +0000 (0:00:01.312) 0:07:54.867 ******
2025-09-19 17:20:26.327598 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:20:26.327605 | orchestrator |
2025-09-19 17:20:26.327611 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-09-19 17:20:26.327618 | orchestrator | Friday 19 September 2025 17:20:07 +0000 (0:00:01.282) 0:07:56.150 ******
2025-09-19 17:20:26.327624 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-19 17:20:26.327631 | orchestrator |
2025-09-19 17:20:26.327638 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-09-19 17:20:26.327644 | orchestrator | Friday 19 September 2025 17:20:18 +0000 (0:00:11.431) 0:08:07.581 ******
2025-09-19 17:20:26.327651 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:20:26.327657 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:20:26.327664 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:20:26.327671 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:20:26.327677 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:20:26.327684 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:20:26.327690 | orchestrator |
2025-09-19 17:20:26.327697 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-09-19 17:20:26.327703 | orchestrator |
2025-09-19 17:20:26.327710 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-09-19 17:20:26.327717 | orchestrator | Friday 19 September 2025 17:20:20 +0000 (0:00:01.788) 0:08:09.370 ******
2025-09-19 17:20:26.327723 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:20:26.327730 | orchestrator | changed: [testbed-node-1]
2025-09-19 17:20:26.327736 | orchestrator | changed: [testbed-node-2]
2025-09-19 17:20:26.327743 | orchestrator |
2025-09-19 17:20:26.327750 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-09-19 17:20:26.327757 | orchestrator |
2025-09-19 17:20:26.327763 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-09-19 17:20:26.327770 | orchestrator | Friday 19 September 2025 17:20:21 +0000 (0:00:01.134) 0:08:10.505 ******
2025-09-19 17:20:26.327777 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.327783 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.327790 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.327796 | orchestrator |
2025-09-19 17:20:26.327803 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-09-19 17:20:26.327809 | orchestrator |
2025-09-19 17:20:26.327816 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-09-19 17:20:26.327823 | orchestrator | Friday 19 September 2025 17:20:22 +0000 (0:00:00.517) 0:08:11.022 ******
2025-09-19 17:20:26.327829 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-09-19 17:20:26.327839 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-19 17:20:26.327846 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-19 17:20:26.327852 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-09-19 17:20:26.327859 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-09-19 17:20:26.327865 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-09-19 17:20:26.327872 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:20:26.327879 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-09-19 17:20:26.327885 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-19 17:20:26.327892 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-19 17:20:26.327902 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-09-19 17:20:26.327909 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-09-19 17:20:26.327916 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-09-19 17:20:26.327922 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:20:26.327929 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-09-19 17:20:26.327935 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-19 17:20:26.327942 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-19 17:20:26.327948 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-09-19 17:20:26.327955 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-09-19 17:20:26.327961 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-09-19 17:20:26.327968 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:20:26.327975 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-09-19 17:20:26.327982 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-19 17:20:26.327988 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-19 17:20:26.327995 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-09-19 17:20:26.328001 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-09-19 17:20:26.328008 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-09-19 17:20:26.328015 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.328022 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-09-19 17:20:26.328028 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-19 17:20:26.328035 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-19 17:20:26.328054 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-09-19 17:20:26.328061 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-09-19 17:20:26.328072 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-09-19 17:20:26.328079 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.328085 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-09-19 17:20:26.328092 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-19 17:20:26.328098 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-19 17:20:26.328105 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-09-19 17:20:26.328111 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-09-19 17:20:26.328118 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-09-19 17:20:26.328124 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.328131 | orchestrator |
2025-09-19 17:20:26.328138 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-09-19 17:20:26.328144 | orchestrator |
2025-09-19 17:20:26.328151 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-09-19 17:20:26.328158 | orchestrator | Friday 19 September 2025 17:20:23 +0000 (0:00:01.366) 0:08:12.389 ******
2025-09-19 17:20:26.328164 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-09-19 17:20:26.328171 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-09-19 17:20:26.328177 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.328184 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-09-19 17:20:26.328190 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-09-19 17:20:26.328197 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.328203 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-09-19 17:20:26.328210 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-09-19 17:20:26.328216 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.328232 | orchestrator |
2025-09-19 17:20:26.328238 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-09-19 17:20:26.328245 | orchestrator |
2025-09-19 17:20:26.328251 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-09-19 17:20:26.328258 | orchestrator | Friday 19 September 2025 17:20:24 +0000 (0:00:00.719) 0:08:13.109 ******
2025-09-19 17:20:26.328264 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.328271 | orchestrator |
2025-09-19 17:20:26.328278 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-09-19 17:20:26.328284 | orchestrator |
2025-09-19 17:20:26.328291 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-09-19 17:20:26.328297 | orchestrator | Friday 19 September 2025 17:20:24 +0000 (0:00:00.654) 0:08:13.763 ******
2025-09-19 17:20:26.328304 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:20:26.328310 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:20:26.328317 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:20:26.328323 | orchestrator |
2025-09-19 17:20:26.328330 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 17:20:26.328336 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 17:20:26.328347 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-09-19 17:20:26.328354 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-19 17:20:26.328361 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-09-19 17:20:26.328368 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-19 17:20:26.328374 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-09-19 17:20:26.328381 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-09-19 17:20:26.328388 | orchestrator |
2025-09-19 17:20:26.328394 | orchestrator |
2025-09-19 17:20:26.328401 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 17:20:26.328408 | orchestrator | Friday 19 September 2025 17:20:25 +0000 (0:00:00.415) 0:08:14.178 ******
2025-09-19 17:20:26.328414 | orchestrator | ===============================================================================
2025-09-19 17:20:26.328421 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 36.26s
2025-09-19 17:20:26.328427 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.02s
2025-09-19 17:20:26.328434 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 24.57s
2025-09-19 17:20:26.328441 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.10s
2025-09-19 17:20:26.328447 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.34s
2025-09-19 17:20:26.328454 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.08s
2025-09-19 17:20:26.328460 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.47s
2025-09-19 17:20:26.328467 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.03s
2025-09-19 17:20:26.328473 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.70s
2025-09-19 17:20:26.328483 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.18s
2025-09-19 17:20:26.328490 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.83s
2025-09-19 17:20:26.328501 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.39s
2025-09-19 17:20:26.328508 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.27s
2025-09-19 17:20:26.328515 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.55s
2025-09-19 17:20:26.328521 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.45s
2025-09-19 17:20:26.328527 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.43s
2025-09-19 17:20:26.328534 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.84s
2025-09-19 17:20:26.328540 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.93s
2025-09-19 17:20:26.328547 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.92s
2025-09-19 17:20:26.328553 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 7.03s
2025-09-19 17:20:26.328560 | orchestrator | 2025-09-19 17:20:26 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:20:29.354479 | orchestrator | 2025-09-19 17:20:29 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:20:32.398833 | orchestrator | 2025-09-19 17:20:32 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:20:35.439272 | orchestrator | 2025-09-19 17:20:35 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:20:38.483694 | orchestrator | 2025-09-19 17:20:38 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:20:41.524241 | orchestrator | 2025-09-19 17:20:41 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:20:44.562925 | orchestrator | 2025-09-19 17:20:44 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:20:47.604608 | orchestrator | 2025-09-19 17:20:47 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:20:50.647009 | orchestrator | 2025-09-19 17:20:50 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:20:53.688727 | orchestrator | 2025-09-19 17:20:53 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:20:56.730957 | orchestrator | 2025-09-19 17:20:56 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:20:59.771188 | orchestrator | 2025-09-19 17:20:59 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:21:02.810585 | orchestrator | 2025-09-19 17:21:02 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:21:05.850249 | orchestrator | 2025-09-19 17:21:05 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:21:08.889660 | orchestrator | 2025-09-19 17:21:08 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:21:11.932926 | orchestrator | 2025-09-19 17:21:11 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:21:14.978453 | orchestrator | 2025-09-19 17:21:14 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:21:18.024496 | orchestrator | 2025-09-19 17:21:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:21:21.068016 | orchestrator | 2025-09-19 17:21:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:21:24.108116 | orchestrator | 2025-09-19 17:21:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-09-19 17:21:27.153588 | orchestrator |
2025-09-19 17:21:27.439161 | orchestrator |
2025-09-19 17:21:27.444139 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Fri Sep 19 17:21:27 UTC 2025
2025-09-19 17:21:27.444194 | orchestrator |
2025-09-19 17:21:27.929220 | orchestrator | ok: Runtime: 0:33:48.783368
2025-09-19 17:21:28.175604 |
2025-09-19 17:21:28.175755 | TASK [Bootstrap services]
2025-09-19 17:21:28.846661 | orchestrator |
2025-09-19 17:21:28.846839 | orchestrator | # BOOTSTRAP
2025-09-19 17:21:28.846861 | orchestrator |
2025-09-19 17:21:28.846875 | orchestrator | + set -e
2025-09-19 17:21:28.846888 | orchestrator | + echo
2025-09-19 17:21:28.846902 | orchestrator | + echo '# BOOTSTRAP'
2025-09-19 17:21:28.846919 | orchestrator | + echo
2025-09-19 17:21:28.846966 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-09-19 17:21:28.859965 | orchestrator | + set -e
2025-09-19 17:21:28.860075 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-09-19 17:21:33.179414 | orchestrator | 2025-09-19 17:21:33 | INFO  | It takes a moment until task 4f71e230-6487-4e35-bb71-39a96469ca76 (flavor-manager) has been started and output is visible here.
2025-09-19 17:21:41.367370 | orchestrator | 2025-09-19 17:21:36 | INFO  | Flavor SCS-1L-1 created
2025-09-19 17:21:41.367494 | orchestrator | 2025-09-19 17:21:36 | INFO  | Flavor SCS-1L-1-5 created
2025-09-19 17:21:41.367509 | orchestrator | 2025-09-19 17:21:37 | INFO  | Flavor SCS-1V-2 created
2025-09-19 17:21:41.367519 | orchestrator | 2025-09-19 17:21:37 | INFO  | Flavor SCS-1V-2-5 created
2025-09-19 17:21:41.367529 | orchestrator | 2025-09-19 17:21:37 | INFO  | Flavor SCS-1V-4 created
2025-09-19 17:21:41.367539 | orchestrator | 2025-09-19 17:21:37 | INFO  | Flavor SCS-1V-4-10 created
2025-09-19 17:21:41.367549 | orchestrator | 2025-09-19 17:21:37 | INFO  | Flavor SCS-1V-8 created
2025-09-19 17:21:41.367559 | orchestrator | 2025-09-19 17:21:38 | INFO  | Flavor SCS-1V-8-20 created
2025-09-19 17:21:41.367580 | orchestrator | 2025-09-19 17:21:38 | INFO  | Flavor SCS-2V-4 created
2025-09-19 17:21:41.367590 | orchestrator | 2025-09-19 17:21:38 | INFO  | Flavor SCS-2V-4-10 created
2025-09-19 17:21:41.367600 | orchestrator | 2025-09-19 17:21:38 | INFO  | Flavor SCS-2V-8 created
2025-09-19 17:21:41.367610 | orchestrator | 2025-09-19 17:21:38 | INFO  | Flavor SCS-2V-8-20 created
2025-09-19 17:21:41.367620 | orchestrator | 2025-09-19 17:21:38 | INFO  | Flavor SCS-2V-16 created
2025-09-19 17:21:41.367629 | orchestrator | 2025-09-19 17:21:38 | INFO  | Flavor SCS-2V-16-50 created
2025-09-19 17:21:41.367639 | orchestrator | 2025-09-19 17:21:39 | INFO  | Flavor SCS-4V-8 created
2025-09-19 17:21:41.367648 | orchestrator | 2025-09-19 17:21:39 | INFO  | Flavor SCS-4V-8-20 created
2025-09-19 17:21:41.367658 | orchestrator | 2025-09-19 17:21:39 | INFO  | Flavor SCS-4V-16 created
2025-09-19 17:21:41.367668 | orchestrator | 2025-09-19 17:21:39 | INFO  | Flavor SCS-4V-16-50 created
2025-09-19 17:21:41.367677 | orchestrator | 2025-09-19 17:21:39 | INFO  | Flavor SCS-4V-32 created
2025-09-19 17:21:41.367687 | orchestrator | 2025-09-19 17:21:39 | INFO  | Flavor SCS-4V-32-100 created
2025-09-19 17:21:41.367697 | orchestrator | 2025-09-19 17:21:39 | INFO  | Flavor SCS-8V-16 created
2025-09-19 17:21:41.367706 | orchestrator | 2025-09-19 17:21:40 | INFO  | Flavor SCS-8V-16-50 created
2025-09-19 17:21:41.367716 | orchestrator | 2025-09-19 17:21:40 | INFO  | Flavor SCS-8V-32 created
2025-09-19 17:21:41.367726 | orchestrator | 2025-09-19 17:21:40 | INFO  | Flavor SCS-8V-32-100 created
2025-09-19 17:21:41.367736 | orchestrator | 2025-09-19 17:21:40 | INFO  | Flavor SCS-16V-32 created
2025-09-19 17:21:41.367745 | orchestrator | 2025-09-19 17:21:40 | INFO  | Flavor SCS-16V-32-100 created
2025-09-19 17:21:41.367755 | orchestrator | 2025-09-19 17:21:40 | INFO  | Flavor SCS-2V-4-20s created
2025-09-19 17:21:41.367765 | orchestrator | 2025-09-19 17:21:40 | INFO  | Flavor SCS-4V-8-50s created
2025-09-19 17:21:41.367774 | orchestrator | 2025-09-19 17:21:41 | INFO  | Flavor SCS-8V-32-100s created
2025-09-19 17:21:43.560225 | orchestrator | 2025-09-19 17:21:43 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-09-19 17:21:53.772936 | orchestrator | 2025-09-19 17:21:53 | INFO  | Task eb970a5d-3db6-446e-b5d8-00deb77b7d01 (bootstrap-basic) was prepared for execution.
2025-09-19 17:21:53.773049 | orchestrator | 2025-09-19 17:21:53 | INFO  | It takes a moment until task eb970a5d-3db6-446e-b5d8-00deb77b7d01 (bootstrap-basic) has been started and output is visible here.
2025-09-19 17:22:57.035095 | orchestrator | 2025-09-19 17:22:57.035301 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-09-19 17:22:57.035319 | orchestrator | 2025-09-19 17:22:57.035331 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-19 17:22:57.035342 | orchestrator | Friday 19 September 2025 17:21:57 +0000 (0:00:00.074) 0:00:00.074 ****** 2025-09-19 17:22:57.035353 | orchestrator | ok: [localhost] 2025-09-19 17:22:57.035364 | orchestrator | 2025-09-19 17:22:57.035375 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-09-19 17:22:57.035386 | orchestrator | Friday 19 September 2025 17:22:00 +0000 (0:00:02.827) 0:00:02.901 ****** 2025-09-19 17:22:57.035396 | orchestrator | ok: [localhost] 2025-09-19 17:22:57.035407 | orchestrator | 2025-09-19 17:22:57.035418 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-09-19 17:22:57.035429 | orchestrator | Friday 19 September 2025 17:22:08 +0000 (0:00:07.943) 0:00:10.845 ****** 2025-09-19 17:22:57.035439 | orchestrator | changed: [localhost] 2025-09-19 17:22:57.035451 | orchestrator | 2025-09-19 17:22:57.035461 | orchestrator | TASK [Get volume type local] *************************************************** 2025-09-19 17:22:57.035472 | orchestrator | Friday 19 September 2025 17:22:16 +0000 (0:00:07.700) 0:00:18.546 ****** 2025-09-19 17:22:57.035483 | orchestrator | ok: [localhost] 2025-09-19 17:22:57.035494 | orchestrator | 2025-09-19 17:22:57.035504 | orchestrator | TASK [Create volume type local] ************************************************ 2025-09-19 17:22:57.035515 | orchestrator | Friday 19 September 2025 17:22:23 +0000 (0:00:06.914) 0:00:25.460 ****** 2025-09-19 17:22:57.035530 | orchestrator | changed: [localhost] 2025-09-19 17:22:57.035541 | orchestrator | 2025-09-19 17:22:57.035551 | orchestrator | 
TASK [Create public network] ***************************************************
2025-09-19 17:22:57.035562 | orchestrator | Friday 19 September 2025 17:22:29 +0000 (0:00:06.494) 0:00:31.955 ******
2025-09-19 17:22:57.035573 | orchestrator | changed: [localhost]
2025-09-19 17:22:57.035583 | orchestrator |
2025-09-19 17:22:57.035594 | orchestrator | TASK [Set public network to default] *******************************************
2025-09-19 17:22:57.035605 | orchestrator | Friday 19 September 2025 17:22:36 +0000 (0:00:06.977) 0:00:38.933 ******
2025-09-19 17:22:57.035615 | orchestrator | changed: [localhost]
2025-09-19 17:22:57.035626 | orchestrator |
2025-09-19 17:22:57.035636 | orchestrator | TASK [Create public subnet] ****************************************************
2025-09-19 17:22:57.035664 | orchestrator | Friday 19 September 2025 17:22:43 +0000 (0:00:07.172) 0:00:46.105 ******
2025-09-19 17:22:57.035676 | orchestrator | changed: [localhost]
2025-09-19 17:22:57.035686 | orchestrator |
2025-09-19 17:22:57.035697 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-09-19 17:22:57.035708 | orchestrator | Friday 19 September 2025 17:22:48 +0000 (0:00:05.183) 0:00:51.289 ******
2025-09-19 17:22:57.035718 | orchestrator | changed: [localhost]
2025-09-19 17:22:57.035729 | orchestrator |
2025-09-19 17:22:57.035740 | orchestrator | TASK [Create manager role] *****************************************************
2025-09-19 17:22:57.035750 | orchestrator | Friday 19 September 2025 17:22:53 +0000 (0:00:04.281) 0:00:55.570 ******
2025-09-19 17:22:57.035761 | orchestrator | ok: [localhost]
2025-09-19 17:22:57.035771 | orchestrator |
2025-09-19 17:22:57.035782 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 17:22:57.035793 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-19 17:22:57.035804 | orchestrator |
2025-09-19 17:22:57.035815 | orchestrator |
2025-09-19 17:22:57.035826 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 17:22:57.035857 | orchestrator | Friday 19 September 2025 17:22:56 +0000 (0:00:03.517) 0:00:59.088 ******
2025-09-19 17:22:57.035869 | orchestrator | ===============================================================================
2025-09-19 17:22:57.035879 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.94s
2025-09-19 17:22:57.035890 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.70s
2025-09-19 17:22:57.035901 | orchestrator | Set public network to default ------------------------------------------- 7.17s
2025-09-19 17:22:57.035911 | orchestrator | Create public network --------------------------------------------------- 6.98s
2025-09-19 17:22:57.035922 | orchestrator | Get volume type local --------------------------------------------------- 6.91s
2025-09-19 17:22:57.035933 | orchestrator | Create volume type local ------------------------------------------------ 6.49s
2025-09-19 17:22:57.035944 | orchestrator | Create public subnet ---------------------------------------------------- 5.18s
2025-09-19 17:22:57.035954 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.28s
2025-09-19 17:22:57.035965 | orchestrator | Create manager role ----------------------------------------------------- 3.52s
2025-09-19 17:22:57.035976 | orchestrator | Gathering Facts --------------------------------------------------------- 2.83s
2025-09-19 17:22:59.370993 | orchestrator | 2025-09-19 17:22:59 | INFO  | It takes a moment until task 88ad5ab9-2ede-47bf-a13a-53f07797447e (image-manager) has been started and output is visible here.
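The PLAY RECAP line above uses Ansible's fixed `key=value` counter layout. As an illustration only (a hypothetical helper, not part of the testbed tooling), such a line can be parsed like this:

```python
import re


def parse_recap(line: str) -> dict:
    """Parse an Ansible PLAY RECAP host line into its counters."""
    host, _, counters = line.partition(":")
    return {
        "host": host.strip(),
        # every "name=digits" pair becomes an integer field
        **{k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", counters)},
    }


recap = parse_recap(
    "localhost : ok=10  changed=6  unreachable=0 failed=0 "
    "skipped=0 rescued=0 ignored=0"
)
# a CI gate would typically only care about the failure counters
assert recap["failed"] == 0 and recap["unreachable"] == 0
```

This is a sketch; a production gate would also handle multiple host lines and missing counters.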
2025-09-19 17:23:39.318867 | orchestrator | 2025-09-19 17:23:02 | INFO  | Processing image 'Cirros 0.6.2'
2025-09-19 17:23:39.318977 | orchestrator | 2025-09-19 17:23:02 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-09-19 17:23:39.318995 | orchestrator | 2025-09-19 17:23:02 | INFO  | Importing image Cirros 0.6.2
2025-09-19 17:23:39.319006 | orchestrator | 2025-09-19 17:23:02 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-09-19 17:23:39.319017 | orchestrator | 2025-09-19 17:23:04 | INFO  | Waiting for image to leave queued state...
2025-09-19 17:23:39.319028 | orchestrator | 2025-09-19 17:23:06 | INFO  | Waiting for import to complete...
2025-09-19 17:23:39.319038 | orchestrator | 2025-09-19 17:23:16 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-09-19 17:23:39.319048 | orchestrator | 2025-09-19 17:23:16 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-09-19 17:23:39.319058 | orchestrator | 2025-09-19 17:23:16 | INFO  | Setting internal_version = 0.6.2
2025-09-19 17:23:39.319068 | orchestrator | 2025-09-19 17:23:16 | INFO  | Setting image_original_user = cirros
2025-09-19 17:23:39.319078 | orchestrator | 2025-09-19 17:23:16 | INFO  | Adding tag os:cirros
2025-09-19 17:23:39.319088 | orchestrator | 2025-09-19 17:23:17 | INFO  | Setting property architecture: x86_64
2025-09-19 17:23:39.319098 | orchestrator | 2025-09-19 17:23:17 | INFO  | Setting property hw_disk_bus: scsi
2025-09-19 17:23:39.319108 | orchestrator | 2025-09-19 17:23:17 | INFO  | Setting property hw_rng_model: virtio
2025-09-19 17:23:39.319118 | orchestrator | 2025-09-19 17:23:17 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-19 17:23:39.319128 | orchestrator | 2025-09-19 17:23:17 | INFO  | Setting property hw_watchdog_action: reset
2025-09-19 17:23:39.319138 | orchestrator | 2025-09-19 17:23:18 | INFO  | Setting property hypervisor_type: qemu
2025-09-19 17:23:39.319147 | orchestrator | 2025-09-19 17:23:18 | INFO  | Setting property os_distro: cirros
2025-09-19 17:23:39.319157 | orchestrator | 2025-09-19 17:23:18 | INFO  | Setting property os_purpose: minimal
2025-09-19 17:23:39.319201 | orchestrator | 2025-09-19 17:23:18 | INFO  | Setting property replace_frequency: never
2025-09-19 17:23:39.319232 | orchestrator | 2025-09-19 17:23:18 | INFO  | Setting property uuid_validity: none
2025-09-19 17:23:39.319242 | orchestrator | 2025-09-19 17:23:19 | INFO  | Setting property provided_until: none
2025-09-19 17:23:39.319258 | orchestrator | 2025-09-19 17:23:19 | INFO  | Setting property image_description: Cirros
2025-09-19 17:23:39.319272 | orchestrator | 2025-09-19 17:23:19 | INFO  | Setting property image_name: Cirros
2025-09-19 17:23:39.319282 | orchestrator | 2025-09-19 17:23:19 | INFO  | Setting property internal_version: 0.6.2
2025-09-19 17:23:39.319292 | orchestrator | 2025-09-19 17:23:20 | INFO  | Setting property image_original_user: cirros
2025-09-19 17:23:39.319301 | orchestrator | 2025-09-19 17:23:20 | INFO  | Setting property os_version: 0.6.2
2025-09-19 17:23:39.319311 | orchestrator | 2025-09-19 17:23:20 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-09-19 17:23:39.319322 | orchestrator | 2025-09-19 17:23:20 | INFO  | Setting property image_build_date: 2023-05-30
2025-09-19 17:23:39.319332 | orchestrator | 2025-09-19 17:23:20 | INFO  | Checking status of 'Cirros 0.6.2'
2025-09-19 17:23:39.319341 | orchestrator | 2025-09-19 17:23:20 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-09-19 17:23:39.319351 | orchestrator | 2025-09-19 17:23:20 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-09-19 17:23:39.319361 | orchestrator | 2025-09-19 17:23:21 | INFO  | Processing image 'Cirros 0.6.3'
2025-09-19 17:23:39.319370 | orchestrator | 2025-09-19 17:23:21 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-09-19 17:23:39.319380 | orchestrator | 2025-09-19 17:23:21 | INFO  | Importing image Cirros 0.6.3
2025-09-19 17:23:39.319392 | orchestrator | 2025-09-19 17:23:21 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-09-19 17:23:39.319403 | orchestrator | 2025-09-19 17:23:22 | INFO  | Waiting for image to leave queued state...
2025-09-19 17:23:39.319413 | orchestrator | 2025-09-19 17:23:24 | INFO  | Waiting for import to complete...
2025-09-19 17:23:39.319441 | orchestrator | 2025-09-19 17:23:34 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-09-19 17:23:39.319453 | orchestrator | 2025-09-19 17:23:34 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-09-19 17:23:39.319464 | orchestrator | 2025-09-19 17:23:34 | INFO  | Setting internal_version = 0.6.3
2025-09-19 17:23:39.319475 | orchestrator | 2025-09-19 17:23:34 | INFO  | Setting image_original_user = cirros
2025-09-19 17:23:39.319486 | orchestrator | 2025-09-19 17:23:34 | INFO  | Adding tag os:cirros
2025-09-19 17:23:39.319497 | orchestrator | 2025-09-19 17:23:35 | INFO  | Setting property architecture: x86_64
2025-09-19 17:23:39.319508 | orchestrator | 2025-09-19 17:23:35 | INFO  | Setting property hw_disk_bus: scsi
2025-09-19 17:23:39.319519 | orchestrator | 2025-09-19 17:23:35 | INFO  | Setting property hw_rng_model: virtio
2025-09-19 17:23:39.319530 | orchestrator | 2025-09-19 17:23:35 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-19 17:23:39.319540 | orchestrator | 2025-09-19 17:23:35 | INFO  | Setting property hw_watchdog_action: reset
2025-09-19 17:23:39.319551 | orchestrator | 2025-09-19 17:23:36 | INFO  | Setting property hypervisor_type: qemu
2025-09-19 17:23:39.319562 | orchestrator | 2025-09-19 17:23:36 | INFO  | Setting property os_distro: cirros
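The "Setting property" lines above show openstack-image-manager expanding an image definition into Glance tags and properties. The following sketch mimics that expansion for a dict-based definition; the definition layout here is illustrative, not the tool's real YAML schema:

```python
def expand_definition(name: str, definition: dict):
    """Expand a simplified image definition into (tags, properties),
    mirroring the tags/properties visible in the log output."""
    version = definition["version"]
    properties = dict(definition.get("properties", {}))
    # derived metadata, as logged by the image manager
    properties.update({
        "image_name": name,
        "internal_version": version,
        "os_version": version,
        "image_original_user": definition["login"],
        "image_source": definition["url"],
    })
    tags = ["os:" + definition["distro"]]
    return tags, properties


tags, props = expand_definition("Cirros", {
    "version": "0.6.2",
    "login": "cirros",
    "distro": "cirros",
    "url": "https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img",
    "properties": {"hw_disk_bus": "scsi", "hw_rng_model": "virtio"},
})
assert tags == ["os:cirros"]
assert props["internal_version"] == "0.6.2"
```

The real tool additionally manages visibility, renames, and removal candidates, as the surrounding log shows.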
2025-09-19 17:23:39.319581 | orchestrator | 2025-09-19 17:23:36 | INFO  | Setting property os_purpose: minimal
2025-09-19 17:23:39.319593 | orchestrator | 2025-09-19 17:23:36 | INFO  | Setting property replace_frequency: never
2025-09-19 17:23:39.319604 | orchestrator | 2025-09-19 17:23:36 | INFO  | Setting property uuid_validity: none
2025-09-19 17:23:39.319615 | orchestrator | 2025-09-19 17:23:37 | INFO  | Setting property provided_until: none
2025-09-19 17:23:39.319626 | orchestrator | 2025-09-19 17:23:37 | INFO  | Setting property image_description: Cirros
2025-09-19 17:23:39.319637 | orchestrator | 2025-09-19 17:23:37 | INFO  | Setting property image_name: Cirros
2025-09-19 17:23:39.319647 | orchestrator | 2025-09-19 17:23:37 | INFO  | Setting property internal_version: 0.6.3
2025-09-19 17:23:39.319657 | orchestrator | 2025-09-19 17:23:37 | INFO  | Setting property image_original_user: cirros
2025-09-19 17:23:39.319667 | orchestrator | 2025-09-19 17:23:38 | INFO  | Setting property os_version: 0.6.3
2025-09-19 17:23:39.319676 | orchestrator | 2025-09-19 17:23:38 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-09-19 17:23:39.319686 | orchestrator | 2025-09-19 17:23:38 | INFO  | Setting property image_build_date: 2024-09-26
2025-09-19 17:23:39.319700 | orchestrator | 2025-09-19 17:23:38 | INFO  | Checking status of 'Cirros 0.6.3'
2025-09-19 17:23:39.319710 | orchestrator | 2025-09-19 17:23:38 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-09-19 17:23:39.319720 | orchestrator | 2025-09-19 17:23:38 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-09-19 17:23:39.601543 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-09-19 17:23:41.706334 | orchestrator | 2025-09-19 17:23:41 | INFO  | date: 2025-09-19
2025-09-19 17:23:41.706436 | orchestrator | 2025-09-19 17:23:41 | INFO  | image: octavia-amphora-haproxy-2024.2.20250919.qcow2
2025-09-19 17:23:41.706454 | orchestrator | 2025-09-19 17:23:41 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2
2025-09-19 17:23:41.706489 | orchestrator | 2025-09-19 17:23:41 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2.CHECKSUM
2025-09-19 17:23:41.735506 | orchestrator | 2025-09-19 17:23:41 | INFO  | checksum: cb1f8a9bf0aeb0e92074b04499e688b0043001241167a8bf8df49931cc66885f
2025-09-19 17:23:41.799470 | orchestrator | 2025-09-19 17:23:41 | INFO  | It takes a moment until task 1c2eeafe-effe-487a-914d-3d23952e3484 (image-manager) has been started and output is visible here.
2025-09-19 17:24:43.091256 | orchestrator | 2025-09-19 17:23:44 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-09-19'
2025-09-19 17:24:43.091374 | orchestrator | 2025-09-19 17:23:44 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2: 200
2025-09-19 17:24:43.091395 | orchestrator | 2025-09-19 17:23:44 | INFO  | Importing image OpenStack Octavia Amphora 2025-09-19
2025-09-19 17:24:43.091408 | orchestrator | 2025-09-19 17:23:44 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2
2025-09-19 17:24:43.091421 | orchestrator | 2025-09-19 17:23:45 | INFO  | Waiting for image to leave queued state...
2025-09-19 17:24:43.091432 | orchestrator | 2025-09-19 17:23:47 | INFO  | Waiting for import to complete...
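The amphora bootstrap script above resolves a published SHA-256 from a `.CHECKSUM` URL before importing the qcow2 image. A minimal sketch of the same kind of verification for a locally downloaded file (standard-library `hashlib` only; the function names are illustrative):

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 without loading it all into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        while chunk := fh.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def verify(path: str, expected: str) -> bool:
    """Compare the computed digest against a published checksum string."""
    return sha256_of(path) == expected.strip().lower()
```

Streaming in chunks matters here because amphora images are hundreds of megabytes; reading the whole file at once would be wasteful on a small orchestrator node.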
2025-09-19 17:24:43.091468 | orchestrator | 2025-09-19 17:23:57 | INFO  | Waiting for import to complete...
2025-09-19 17:24:43.091480 | orchestrator | 2025-09-19 17:24:07 | INFO  | Waiting for import to complete...
2025-09-19 17:24:43.091491 | orchestrator | 2025-09-19 17:24:18 | INFO  | Waiting for import to complete...
2025-09-19 17:24:43.091501 | orchestrator | 2025-09-19 17:24:28 | INFO  | Waiting for import to complete...
2025-09-19 17:24:43.091512 | orchestrator | 2025-09-19 17:24:38 | INFO  | Import of 'OpenStack Octavia Amphora 2025-09-19' successfully completed, reloading images
2025-09-19 17:24:43.091524 | orchestrator | 2025-09-19 17:24:38 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-09-19'
2025-09-19 17:24:43.091535 | orchestrator | 2025-09-19 17:24:38 | INFO  | Setting internal_version = 2025-09-19
2025-09-19 17:24:43.091546 | orchestrator | 2025-09-19 17:24:38 | INFO  | Setting image_original_user = ubuntu
2025-09-19 17:24:43.091557 | orchestrator | 2025-09-19 17:24:38 | INFO  | Adding tag amphora
2025-09-19 17:24:43.091568 | orchestrator | 2025-09-19 17:24:38 | INFO  | Adding tag os:ubuntu
2025-09-19 17:24:43.091579 | orchestrator | 2025-09-19 17:24:39 | INFO  | Setting property architecture: x86_64
2025-09-19 17:24:43.091589 | orchestrator | 2025-09-19 17:24:39 | INFO  | Setting property hw_disk_bus: scsi
2025-09-19 17:24:43.091600 | orchestrator | 2025-09-19 17:24:39 | INFO  | Setting property hw_rng_model: virtio
2025-09-19 17:24:43.091610 | orchestrator | 2025-09-19 17:24:39 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-09-19 17:24:43.091634 | orchestrator | 2025-09-19 17:24:39 | INFO  | Setting property hw_watchdog_action: reset
2025-09-19 17:24:43.091645 | orchestrator | 2025-09-19 17:24:40 | INFO  | Setting property hypervisor_type: qemu
2025-09-19 17:24:43.091656 | orchestrator | 2025-09-19 17:24:40 | INFO  | Setting property os_distro: ubuntu
2025-09-19 17:24:43.091669 | orchestrator | 2025-09-19 17:24:40 | INFO  | Setting property replace_frequency: quarterly
2025-09-19 17:24:43.091681 | orchestrator | 2025-09-19 17:24:40 | INFO  | Setting property uuid_validity: last-1
2025-09-19 17:24:43.091694 | orchestrator | 2025-09-19 17:24:40 | INFO  | Setting property provided_until: none
2025-09-19 17:24:43.091705 | orchestrator | 2025-09-19 17:24:41 | INFO  | Setting property os_purpose: network
2025-09-19 17:24:43.091718 | orchestrator | 2025-09-19 17:24:41 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-09-19 17:24:43.091731 | orchestrator | 2025-09-19 17:24:41 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-09-19 17:24:43.091743 | orchestrator | 2025-09-19 17:24:41 | INFO  | Setting property internal_version: 2025-09-19
2025-09-19 17:24:43.091755 | orchestrator | 2025-09-19 17:24:41 | INFO  | Setting property image_original_user: ubuntu
2025-09-19 17:24:43.091768 | orchestrator | 2025-09-19 17:24:42 | INFO  | Setting property os_version: 2025-09-19
2025-09-19 17:24:43.091781 | orchestrator | 2025-09-19 17:24:42 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250919.qcow2
2025-09-19 17:24:43.091793 | orchestrator | 2025-09-19 17:24:42 | INFO  | Setting property image_build_date: 2025-09-19
2025-09-19 17:24:43.091806 | orchestrator | 2025-09-19 17:24:42 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-09-19'
2025-09-19 17:24:43.091819 | orchestrator | 2025-09-19 17:24:42 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-09-19'
2025-09-19 17:24:43.091856 | orchestrator | 2025-09-19 17:24:42 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-09-19 17:24:43.091869 | orchestrator | 2025-09-19 17:24:42 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-09-19 17:24:43.091883 | orchestrator | 2025-09-19 17:24:42 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-09-19 17:24:43.091897 | orchestrator | 2025-09-19 17:24:42 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-09-19 17:24:43.836633 | orchestrator | ok: Runtime: 0:03:14.951244
2025-09-19 17:24:43.903555 |
2025-09-19 17:24:43.903687 | TASK [Run checks]
2025-09-19 17:24:44.568994 | orchestrator | + set -e
2025-09-19 17:24:44.569240 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 17:24:44.569278 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 17:24:44.569301 | orchestrator | ++ INTERACTIVE=false
2025-09-19 17:24:44.569315 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 17:24:44.569327 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 17:24:44.569341 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-09-19 17:24:44.570773 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-09-19 17:24:44.575549 | orchestrator |
2025-09-19 17:24:44.575607 | orchestrator | # CHECK
2025-09-19 17:24:44.575619 | orchestrator |
2025-09-19 17:24:44.575631 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-19 17:24:44.575646 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-19 17:24:44.575657 | orchestrator | + echo
2025-09-19 17:24:44.575668 | orchestrator | + echo '# CHECK'
2025-09-19 17:24:44.575679 | orchestrator | + echo
2025-09-19 17:24:44.575694 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-19 17:24:44.576701 | orchestrator | ++ semver latest 5.0.0
2025-09-19 17:24:44.655603 | orchestrator |
2025-09-19 17:24:44.655694 | orchestrator | ## Containers @ testbed-manager
2025-09-19 17:24:44.655708 | orchestrator |
2025-09-19 17:24:44.655721 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-19 17:24:44.655733 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-19 17:24:44.655743 | orchestrator | + echo
2025-09-19 17:24:44.655755 | orchestrator | + echo '## Containers @ testbed-manager'
2025-09-19 17:24:44.655766 | orchestrator | + echo
2025-09-19 17:24:44.655777 | orchestrator | + osism container testbed-manager ps
2025-09-19 17:24:47.071005 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-19 17:24:47.071148 | orchestrator | 8043e19e4c80 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_blackbox_exporter
2025-09-19 17:24:47.071215 | orchestrator | bf5e9e94ce09 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_alertmanager
2025-09-19 17:24:47.071237 | orchestrator | a7ab41643bfa registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor
2025-09-19 17:24:47.071249 | orchestrator | 99151448b80e registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter
2025-09-19 17:24:47.071261 | orchestrator | d07f765d8ecc registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_server
2025-09-19 17:24:47.071277 | orchestrator | 6643420f0fef registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient
2025-09-19 17:24:47.071289 | orchestrator | 014c60e6bf33 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-09-19 17:24:47.071301 | orchestrator | a7fdd8d60a1e registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 29 minutes kolla_toolbox
2025-09-19 17:24:47.071313 | orchestrator | d32d803b11ec registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-09-19 17:24:47.071351 | orchestrator | 893f2870f242 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 30 minutes (healthy) 80/tcp phpmyadmin
2025-09-19 17:24:47.071364 | orchestrator | b093e7d970ff registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 31 minutes ago Up 31 minutes openstackclient
2025-09-19 17:24:47.071376 | orchestrator | 7b956d285db5 registry.osism.tech/osism/homer:v25.08.1 "/bin/sh /entrypoint…" 31 minutes ago Up 31 minutes (healthy) 8080/tcp homer
2025-09-19 17:24:47.071388 | orchestrator | 1dbae08d217c registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 53 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-09-19 17:24:47.071400 | orchestrator | 46fdec6c08eb registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 57 minutes ago Up 37 minutes (healthy) manager-inventory_reconciler-1
2025-09-19 17:24:47.071412 | orchestrator | a65f0e501346 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) osism-kubernetes
2025-09-19 17:24:47.071447 | orchestrator | f553c438f486 registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) osism-ansible
2025-09-19 17:24:47.071465 | orchestrator | 314dafb3036a registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) kolla-ansible
2025-09-19 17:24:47.071477 | orchestrator | d0a77e2febb7 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 58 minutes ago Up 38 minutes (healthy) ceph-ansible
2025-09-19 17:24:47.071490 | orchestrator | aae8e3843b09 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 58 minutes ago Up 38 minutes (healthy) 8000/tcp manager-ara-server-1
2025-09-19 17:24:47.071502 | orchestrator | d8ad30435e3e registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 58 minutes ago Up 38 minutes (healthy) 6379/tcp manager-redis-1
2025-09-19 17:24:47.071514 | orchestrator | ea3fa61d5b4f registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 38 minutes (healthy) manager-flower-1
2025-09-19 17:24:47.071526 | orchestrator | 678fb4952981 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 38 minutes (healthy) manager-listener-1
2025-09-19 17:24:47.071538 | orchestrator | d650d40f0f67 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 38 minutes (healthy) manager-openstack-1
2025-09-19 17:24:47.071550 | orchestrator | aacdde5121e6 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" 58 minutes ago Up 38 minutes (healthy) 3306/tcp manager-mariadb-1
2025-09-19 17:24:47.071571 | orchestrator | 283fa946ed0a registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 38 minutes (healthy) manager-beat-1
2025-09-19 17:24:47.071583 | orchestrator | 82ab22745073 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 58 minutes ago Up 38 minutes (healthy) osismclient
2025-09-19 17:24:47.071596 | orchestrator | 690a6084e9f2 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 58 minutes ago Up 38 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-09-19 17:24:47.071608 | orchestrator | 01754a3473d9 registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" 58 minutes ago Up 38 minutes 192.168.16.5:3000->3000/tcp osism-frontend
2025-09-19 17:24:47.071620 | orchestrator | d0b805b99241 registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 59 minutes ago Up 59 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-09-19 17:24:47.426776 | orchestrator |
2025-09-19 17:24:47.426896 | orchestrator | ## Images @ testbed-manager
2025-09-19 17:24:47.426914 | orchestrator |
2025-09-19 17:24:47.426927 | orchestrator | + echo
2025-09-19 17:24:47.426938 | orchestrator | + echo '## Images @ testbed-manager'
2025-09-19 17:24:47.426950 | orchestrator | + echo
2025-09-19 17:24:47.426961 | orchestrator | + osism container testbed-manager images
2025-09-19 17:24:49.755960 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-19 17:24:49.756058 | orchestrator | registry.osism.tech/osism/osism-ansible latest f96a8a84ca6e 7 hours ago 594MB
2025-09-19 17:24:49.756069 | orchestrator | registry.osism.tech/osism/osism latest caf71a42605c 8 hours ago 325MB
2025-09-19 17:24:49.756097 | orchestrator | registry.osism.tech/osism/osism-frontend latest 0e15c54d8d9c 8 hours ago 236MB
2025-09-19 17:24:49.756120 | orchestrator | registry.osism.tech/osism/homer v25.08.1 8c383e1d56e2 14 hours ago 11.5MB
2025-09-19 17:24:49.756139 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 84cc807d7f93 14 hours ago 243MB
2025-09-19 17:24:49.756147 | orchestrator | registry.osism.tech/osism/cephclient reef 89fec8934dce 14 hours ago 453MB
2025-09-19 17:24:49.756155 | orchestrator | registry.osism.tech/kolla/cron 2024.2 704de7ec9f25 16 hours ago 320MB
2025-09-19 17:24:49.756163 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 1adafff72696 16 hours ago 631MB
2025-09-19 17:24:49.756198 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 88da420ad3bb 16 hours ago 748MB
2025-09-19 17:24:49.756206 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 3e7c6c197ac3 16 hours ago 459MB
2025-09-19 17:24:49.756213 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 4d720d677fef 16 hours ago 363MB
2025-09-19 17:24:49.756220 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 bb9bb451bb9e 16 hours ago 412MB
2025-09-19 17:24:49.756228 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 84f15bd5d79b 16 hours ago 894MB
2025-09-19 17:24:49.756235 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 3a7a363e61d4 16 hours ago 360MB
2025-09-19 17:24:49.756243 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 9f643559a7a5 17 hours ago 589MB
2025-09-19 17:24:49.756305 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest b451f465ea51 17 hours ago 1.22GB
2025-09-19 17:24:49.756312 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 2fa59ab2ac91 17 hours ago 543MB
2025-09-19 17:24:49.756508 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 013533981ce6 17 hours ago 315MB
2025-09-19 17:24:49.756521 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 3 weeks ago 275MB
2025-09-19 17:24:49.756528 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.3 48f7ae354376 6 weeks ago 329MB
2025-09-19 17:24:49.756535 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 8 weeks ago 226MB
2025-09-19 17:24:49.756543 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine f218e591b571 2 months ago 41.4MB
2025-09-19 17:24:49.756550 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 7 months ago 571MB
2025-09-19 17:24:49.756557 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 15 months ago 146MB
2025-09-19 17:24:50.048547 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-09-19 17:24:50.049032 | orchestrator | ++ semver latest 5.0.0
2025-09-19 17:24:50.100251 | orchestrator |
2025-09-19 17:24:50.100359 | orchestrator | ## Containers @ testbed-node-0
2025-09-19 17:24:50.100374 | orchestrator |
2025-09-19 17:24:50.100386 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-19 17:24:50.100397 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-19 17:24:50.100408 | orchestrator | + echo
2025-09-19 17:24:50.100419 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-09-19 17:24:50.100431 | orchestrator | + echo
2025-09-19 17:24:50.100442 | orchestrator | + osism container testbed-node-0 ps
2025-09-19 17:24:52.429149 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-09-19 17:24:52.429268 | orchestrator | 89c90ba7d7af registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy
2025-09-19 17:24:52.429282 | orchestrator | dae946ec3fd3 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor
2025-09-19 17:24:52.429293 | orchestrator | 3e544da64487 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-09-19 17:24:52.429302 | orchestrator | 76f29f732cb3 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler
2025-09-19 17:24:52.429312 | orchestrator | c5187ec67201 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana
2025-09-19 17:24:52.429344 | orchestrator | 4d112ea309ae registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api
2025-09-19 17:24:52.429355 | orchestrator | 8a7409b1f524 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler
2025-09-19 17:24:52.429364 | orchestrator | b15f4a007183 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-09-19 17:24:52.429374 | orchestrator | 96ff203abf51 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter
2025-09-19 17:24:52.429384 | orchestrator | ab506379f8a5 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor
2025-09-19 17:24:52.429415 | orchestrator | ff1253f99314 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter
2025-09-19 17:24:52.429425 | orchestrator | f617eeb4c4fe registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter
2025-09-19 17:24:52.429434 | orchestrator | c03da62a6da0 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter
2025-09-19 17:24:52.429444 | orchestrator | 9eef14dccc2d registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor
2025-09-19 17:24:52.429454 | orchestrator | e37024393339 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-09-19 17:24:52.429463 | orchestrator | fbddb9db7e6b registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server
2025-09-19 17:24:52.429473 | orchestrator | ecce1b322f80 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker
2025-09-19 17:24:52.429482 | orchestrator | 90789913defd registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns
2025-09-19 17:24:52.429492 | orchestrator | 64caac40b512 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api
2025-09-19 17:24:52.429502 | orchestrator | 510faa3e0298 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer
2025-09-19 17:24:52.429512 | orchestrator | 2bbb5fb700bc registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_central
2025-09-19 17:24:52.429539 | orchestrator | 65d63e8fbad6 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-09-19 17:24:52.429556 | orchestrator | 44a440bdb185 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2025-09-19 17:24:52.429574 | orchestrator | 126ddb4984f3 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker
2025-09-19 17:24:52.429600 | orchestrator | d3ea02797995 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener
2025-09-19 17:24:52.429618 | orchestrator | 76ee6d3cb153 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api
2025-09-19 17:24:52.429641 | orchestrator | 7bb48c268027 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-0
2025-09-19 17:24:52.429658 | orchestrator | 5e400279efd8 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2025-09-19 17:24:52.429674 | orchestrator | 7e6f6082fa1f registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2025-09-19 17:24:52.429691 | orchestrator | d8db54d16761 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-09-19 17:24:52.429720 | orchestrator | 54abbec05eea registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon
2025-09-19 17:24:52.429738 | orchestrator | eeeb1c844cea registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb
2025-09-19 17:24:52.429755 | orchestrator | a1b151b96871 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards
2025-09-19 17:24:52.429773 | orchestrator | 5277089250c9 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-09-19 17:24:52.429792 | orchestrator | 88433d537b9c registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0
2025-09-19 17:24:52.429811 | orchestrator | bd6a54d7450a registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-09-19 17:24:52.429830 | orchestrator | dcf303674b02 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-09-19 17:24:52.429848 | orchestrator | 2903c39dd1e5 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy
2025-09-19 17:24:52.429865 | orchestrator | e1890a1251da registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd
2025-09-19 17:24:52.429882 | orchestrator | 304ea9429f2e registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db
2025-09-19 17:24:52.429901 | orchestrator | 4fe5011766f9 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db
2025-09-19 17:24:52.429919 | orchestrator | 8d5220122f3a registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 27 minutes ago Up 27 minutes ceph-mon-testbed-node-0
2025-09-19 17:24:52.429936 | orchestrator | 5d3498d6527f registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller
2025-09-19 17:24:52.429954 | orchestrator | 00dce7a44b53 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq
2025-09-19 17:24:52.429985 | orchestrator | 2cea8e24d4c3 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd
2025-09-19 17:24:52.430006 | orchestrator | 7a4c79d6e977 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-09-19 17:24:52.430073 | orchestrator | 5931e23efd41 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-09-19 17:24:52.430085 | orchestrator | ed47e9191230 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-09-19 17:24:52.430096 | orchestrator | 856856dff424 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-09-19 17:24:52.430115 | orchestrator | a079c39d33a1 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron
2025-09-19 17:24:52.430136 | orchestrator | 4ce0aa0b12c1 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-09-19 17:24:52.430147 | orchestrator | f8ebdb8847d6 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd
2025-09-19 17:24:52.729624 | orchestrator |
2025-09-19 17:24:52.729671 | orchestrator | ## Images @ testbed-node-0
2025-09-19 17:24:52.729676 | orchestrator |
2025-09-19 17:24:52.729680 | orchestrator | + echo
2025-09-19 17:24:52.729684 | orchestrator | + echo '## Images @ testbed-node-0'
2025-09-19 17:24:52.729688 | orchestrator | + echo
2025-09-19 17:24:52.729692 | orchestrator | + osism container testbed-node-0 images
2025-09-19 17:24:55.164784 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-19 17:24:55.164866 | orchestrator | registry.osism.tech/osism/ceph-daemon reef
e5544776978f 14 hours ago 1.27GB 2025-09-19 17:24:55.164877 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 f1db521913fc 16 hours ago 321MB 2025-09-19 17:24:55.164887 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 8188ac43bfc9 16 hours ago 1.59GB 2025-09-19 17:24:55.164896 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 97c04a33606a 16 hours ago 1.56GB 2025-09-19 17:24:55.164905 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a49525aaa7c8 16 hours ago 420MB 2025-09-19 17:24:55.164915 | orchestrator | registry.osism.tech/kolla/cron 2024.2 704de7ec9f25 16 hours ago 320MB 2025-09-19 17:24:55.164924 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 01693a8e2538 16 hours ago 377MB 2025-09-19 17:24:55.164933 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 1adafff72696 16 hours ago 631MB 2025-09-19 17:24:55.164942 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 db3bd122416e 16 hours ago 331MB 2025-09-19 17:24:55.164951 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 d6dab43ba5a0 16 hours ago 328MB 2025-09-19 17:24:55.164960 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3c9521a5ec98 16 hours ago 1.05GB 2025-09-19 17:24:55.164970 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 88da420ad3bb 16 hours ago 748MB 2025-09-19 17:24:55.164978 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 3dc19243d77e 16 hours ago 356MB 2025-09-19 17:24:55.164987 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 bb9bb451bb9e 16 hours ago 412MB 2025-09-19 17:24:55.164998 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 a7082db9abd9 16 hours ago 347MB 2025-09-19 17:24:55.165007 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 67f3232669cb 16 hours ago 353MB 2025-09-19 17:24:55.165016 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 
3a7a363e61d4 16 hours ago 360MB 2025-09-19 17:24:55.165025 | orchestrator | registry.osism.tech/kolla/redis 2024.2 e8d42a6f6117 16 hours ago 327MB 2025-09-19 17:24:55.165047 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 af547c4efd0a 16 hours ago 327MB 2025-09-19 17:24:55.165057 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 3962eb463fa6 16 hours ago 364MB 2025-09-19 17:24:55.165066 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a7e1d4e47ed5 16 hours ago 364MB 2025-09-19 17:24:55.165075 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ee281b442e34 16 hours ago 593MB 2025-09-19 17:24:55.165083 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 7ad0c090b6ff 16 hours ago 1.21GB 2025-09-19 17:24:55.165111 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c1cc8c6d6e0b 16 hours ago 949MB 2025-09-19 17:24:55.165121 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 00a30ff3320e 16 hours ago 949MB 2025-09-19 17:24:55.165130 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 87e1722d6fde 16 hours ago 949MB 2025-09-19 17:24:55.165139 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 cea5532286f5 16 hours ago 949MB 2025-09-19 17:24:55.165148 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 619fa8ab46ad 16 hours ago 1.04GB 2025-09-19 17:24:55.165157 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 387ffb26bd8e 16 hours ago 1.04GB 2025-09-19 17:24:55.165166 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 b940b00e6d28 16 hours ago 1.11GB 2025-09-19 17:24:55.165212 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 d8d2bdbcdfc8 16 hours ago 1.16GB 2025-09-19 17:24:55.165222 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 c96704a81666 16 hours ago 1.11GB 2025-09-19 17:24:55.165231 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 
f879e1c6c1ac 16 hours ago 1.25GB 2025-09-19 17:24:55.165240 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 26c8527840b5 16 hours ago 1.3GB 2025-09-19 17:24:55.165249 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 46560f333102 16 hours ago 1.42GB 2025-09-19 17:24:55.165258 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 0bbf0d830f2c 16 hours ago 1.3GB 2025-09-19 17:24:55.165286 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 5be7024268fb 16 hours ago 1.3GB 2025-09-19 17:24:55.165296 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 e1644a6555a9 16 hours ago 1.2GB 2025-09-19 17:24:55.165305 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 1606d013ebc6 16 hours ago 1.31GB 2025-09-19 17:24:55.165314 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 ffb397cc0bd3 16 hours ago 1.41GB 2025-09-19 17:24:55.165324 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4bd40dedaba8 16 hours ago 1.41GB 2025-09-19 17:24:55.165333 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 b71b58e951f1 16 hours ago 1.15GB 2025-09-19 17:24:55.165342 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 fa2b88da8bbb 16 hours ago 1.04GB 2025-09-19 17:24:55.165351 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6f510c22ea81 16 hours ago 1.06GB 2025-09-19 17:24:55.165361 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 31c40f05f58a 16 hours ago 1.06GB 2025-09-19 17:24:55.165372 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 47b1a92e4b1f 16 hours ago 1.06GB 2025-09-19 17:24:55.165381 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 a1b2f088447a 16 hours ago 1.06GB 2025-09-19 17:24:55.165391 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 1d0aefc5bb6e 16 hours ago 1.05GB 2025-09-19 17:24:55.165401 | orchestrator | registry.osism.tech/kolla/designate-api 
2024.2 1259920e02c4 16 hours ago 1.05GB 2025-09-19 17:24:55.165410 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 a20f3aaf2468 16 hours ago 1.05GB 2025-09-19 17:24:55.165419 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 76e17c28b726 16 hours ago 1.06GB 2025-09-19 17:24:55.165429 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 3fc356e89cf7 16 hours ago 1.05GB 2025-09-19 17:24:55.165445 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 9b2119f96562 16 hours ago 1.04GB 2025-09-19 17:24:55.165454 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 06200c7f46a8 16 hours ago 1.04GB 2025-09-19 17:24:55.165462 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 ccbc5d4c2242 16 hours ago 1.04GB 2025-09-19 17:24:55.165471 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 d4bc84d0863f 16 hours ago 1.04GB 2025-09-19 17:24:55.165479 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 ddeb761bf282 16 hours ago 1.12GB 2025-09-19 17:24:55.165488 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 30edefb98e4a 16 hours ago 1.11GB 2025-09-19 17:24:55.165496 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 f84f7ee7f274 16 hours ago 1.1GB 2025-09-19 17:24:55.165505 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 3796447736e1 16 hours ago 1.12GB 2025-09-19 17:24:55.165513 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 7807fcc26d86 16 hours ago 1.1GB 2025-09-19 17:24:55.165522 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 5e883a748a98 16 hours ago 1.1GB 2025-09-19 17:24:55.165531 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 fb223671de61 16 hours ago 1.12GB 2025-09-19 17:24:55.450569 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-19 17:24:55.450890 | orchestrator | ++ semver latest 5.0.0 
2025-09-19 17:24:55.501498 | orchestrator | 2025-09-19 17:24:55.501588 | orchestrator | ## Containers @ testbed-node-1 2025-09-19 17:24:55.501605 | orchestrator | 2025-09-19 17:24:55.501617 | orchestrator | + [[ -1 -eq -1 ]] 2025-09-19 17:24:55.501627 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-19 17:24:55.501638 | orchestrator | + echo 2025-09-19 17:24:55.501650 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-09-19 17:24:55.501661 | orchestrator | + echo 2025-09-19 17:24:55.501672 | orchestrator | + osism container testbed-node-1 ps 2025-09-19 17:24:57.626900 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-19 17:24:57.626998 | orchestrator | c6022140af57 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-09-19 17:24:57.627013 | orchestrator | 51484955779b registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-09-19 17:24:57.627025 | orchestrator | d8977ade7658 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-09-19 17:24:57.627036 | orchestrator | 28632dc39103 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-09-19 17:24:57.627047 | orchestrator | fd041d1f729e registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-09-19 17:24:57.627075 | orchestrator | dd7c7c82a027 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api 2025-09-19 17:24:57.627087 | orchestrator | 3f954cd8dfc9 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) cinder_scheduler 2025-09-19 17:24:57.627098 | orchestrator | aab0b3d712e9 registry.osism.tech/kolla/cinder-api:2024.2 
"dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-09-19 17:24:57.627108 | orchestrator | ca1d28846a67 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2025-09-19 17:24:57.627142 | orchestrator | 189fa8d0e846 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-09-19 17:24:57.627154 | orchestrator | 33995ec9f45f registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2025-09-19 17:24:57.627165 | orchestrator | 8386b58de6aa registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-09-19 17:24:57.627203 | orchestrator | 1117a51a7aaf registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-09-19 17:24:57.627220 | orchestrator | 2a5edd3d6bb6 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-09-19 17:24:57.627240 | orchestrator | 0bb19d9d25d0 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-09-19 17:24:57.627252 | orchestrator | a0fac1b84318 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) neutron_server 2025-09-19 17:24:57.627263 | orchestrator | 435f135121b8 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker 2025-09-19 17:24:57.627274 | orchestrator | cefc7ee867fa registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns 2025-09-19 17:24:57.627290 | 
orchestrator | bd204a88fb91 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2025-09-19 17:24:57.627301 | orchestrator | 6a5ab7b8d9ec registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer 2025-09-19 17:24:57.627312 | orchestrator | 2e20443faa4f registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-09-19 17:24:57.627340 | orchestrator | 1b04997ab583 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-09-19 17:24:57.627351 | orchestrator | e422d7ddbb09 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-09-19 17:24:57.627362 | orchestrator | ff0b4e4c8894 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2025-09-19 17:24:57.627373 | orchestrator | 6dd4d68552d1 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2025-09-19 17:24:57.627384 | orchestrator | 2275707d600c registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api 2025-09-19 17:24:57.627401 | orchestrator | 151dcbe65be5 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-1 2025-09-19 17:24:57.627412 | orchestrator | 35461fe5539a registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-09-19 17:24:57.627431 | orchestrator | b19e9a0b4db6 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) 
keystone_fernet 2025-09-19 17:24:57.627442 | orchestrator | c0a3754ac87f registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-09-19 17:24:57.627453 | orchestrator | 0313c12aaf28 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-09-19 17:24:57.627463 | orchestrator | e0efe7dc910a registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-09-19 17:24:57.627474 | orchestrator | edd33c87b483 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-09-19 17:24:57.627485 | orchestrator | 11c3602595f1 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-09-19 17:24:57.627496 | orchestrator | 2dc4954a7b9d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1 2025-09-19 17:24:57.627506 | orchestrator | 104cc079e156 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-09-19 17:24:57.627517 | orchestrator | 37219243a1bf registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-09-19 17:24:57.627527 | orchestrator | e92aefd36b90 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-09-19 17:24:57.627538 | orchestrator | 8fc66b064e83 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd 2025-09-19 17:24:57.627549 | orchestrator | b0138a05ebc1 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db 2025-09-19 17:24:57.627559 | orchestrator | 7a621e2ca3d3 
registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db 2025-09-19 17:24:57.627570 | orchestrator | 54771672b762 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-09-19 17:24:57.627580 | orchestrator | 3f37631ec61e registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-09-19 17:24:57.627591 | orchestrator | a3ca191c43a0 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2025-09-19 17:24:57.627608 | orchestrator | 273c2fe2ac53 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-09-19 17:24:57.627619 | orchestrator | f5a991d742c7 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-09-19 17:24:57.627630 | orchestrator | 302eb376fcbb registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-09-19 17:24:57.627641 | orchestrator | 3033d532a89c registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-09-19 17:24:57.627658 | orchestrator | 067796b66936 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-09-19 17:24:57.627669 | orchestrator | 4af88d8c3b20 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-09-19 17:24:57.627684 | orchestrator | 8a85a4303e8d registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-09-19 17:24:57.627695 | orchestrator | 73a58c2da78e registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 
minutes fluentd 2025-09-19 17:24:57.823502 | orchestrator | 2025-09-19 17:24:57.823615 | orchestrator | ## Images @ testbed-node-1 2025-09-19 17:24:57.823638 | orchestrator | 2025-09-19 17:24:57.823655 | orchestrator | + echo 2025-09-19 17:24:57.823674 | orchestrator | + echo '## Images @ testbed-node-1' 2025-09-19 17:24:57.823691 | orchestrator | + echo 2025-09-19 17:24:57.823706 | orchestrator | + osism container testbed-node-1 images 2025-09-19 17:24:59.923816 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-09-19 17:24:59.923940 | orchestrator | registry.osism.tech/osism/ceph-daemon reef e5544776978f 14 hours ago 1.27GB 2025-09-19 17:24:59.923975 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 f1db521913fc 16 hours ago 321MB 2025-09-19 17:24:59.923987 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 8188ac43bfc9 16 hours ago 1.59GB 2025-09-19 17:24:59.923998 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 97c04a33606a 16 hours ago 1.56GB 2025-09-19 17:24:59.924009 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a49525aaa7c8 16 hours ago 420MB 2025-09-19 17:24:59.924065 | orchestrator | registry.osism.tech/kolla/cron 2024.2 704de7ec9f25 16 hours ago 320MB 2025-09-19 17:24:59.924078 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 01693a8e2538 16 hours ago 377MB 2025-09-19 17:24:59.924089 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 1adafff72696 16 hours ago 631MB 2025-09-19 17:24:59.924100 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 db3bd122416e 16 hours ago 331MB 2025-09-19 17:24:59.924111 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 d6dab43ba5a0 16 hours ago 328MB 2025-09-19 17:24:59.924121 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3c9521a5ec98 16 hours ago 1.05GB 2025-09-19 17:24:59.924132 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 88da420ad3bb 16 hours ago 748MB 2025-09-19 17:24:59.924142 | 
orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 3dc19243d77e 16 hours ago 356MB 2025-09-19 17:24:59.924153 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 bb9bb451bb9e 16 hours ago 412MB 2025-09-19 17:24:59.924164 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 a7082db9abd9 16 hours ago 347MB 2025-09-19 17:24:59.924213 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 67f3232669cb 16 hours ago 353MB 2025-09-19 17:24:59.924225 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 3a7a363e61d4 16 hours ago 360MB 2025-09-19 17:24:59.924236 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 af547c4efd0a 16 hours ago 327MB 2025-09-19 17:24:59.924246 | orchestrator | registry.osism.tech/kolla/redis 2024.2 e8d42a6f6117 16 hours ago 327MB 2025-09-19 17:24:59.924257 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 3962eb463fa6 16 hours ago 364MB 2025-09-19 17:24:59.924267 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a7e1d4e47ed5 16 hours ago 364MB 2025-09-19 17:24:59.924302 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ee281b442e34 16 hours ago 593MB 2025-09-19 17:24:59.924313 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 7ad0c090b6ff 16 hours ago 1.21GB 2025-09-19 17:24:59.924323 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c1cc8c6d6e0b 16 hours ago 949MB 2025-09-19 17:24:59.924334 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 00a30ff3320e 16 hours ago 949MB 2025-09-19 17:24:59.924344 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 87e1722d6fde 16 hours ago 949MB 2025-09-19 17:24:59.924357 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 cea5532286f5 16 hours ago 949MB 2025-09-19 17:24:59.924369 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 b940b00e6d28 16 
hours ago 1.11GB 2025-09-19 17:24:59.924381 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 d8d2bdbcdfc8 16 hours ago 1.16GB 2025-09-19 17:24:59.924393 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 c96704a81666 16 hours ago 1.11GB 2025-09-19 17:24:59.924405 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 f879e1c6c1ac 16 hours ago 1.25GB 2025-09-19 17:24:59.924417 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 26c8527840b5 16 hours ago 1.3GB 2025-09-19 17:24:59.924429 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 46560f333102 16 hours ago 1.42GB 2025-09-19 17:24:59.924442 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 0bbf0d830f2c 16 hours ago 1.3GB 2025-09-19 17:24:59.924453 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 5be7024268fb 16 hours ago 1.3GB 2025-09-19 17:24:59.924466 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 e1644a6555a9 16 hours ago 1.2GB 2025-09-19 17:24:59.924495 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 1606d013ebc6 16 hours ago 1.31GB 2025-09-19 17:24:59.924508 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 ffb397cc0bd3 16 hours ago 1.41GB 2025-09-19 17:24:59.924520 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4bd40dedaba8 16 hours ago 1.41GB 2025-09-19 17:24:59.924540 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 b71b58e951f1 16 hours ago 1.15GB 2025-09-19 17:24:59.924552 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 fa2b88da8bbb 16 hours ago 1.04GB 2025-09-19 17:24:59.924564 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6f510c22ea81 16 hours ago 1.06GB 2025-09-19 17:24:59.924577 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 31c40f05f58a 16 hours ago 1.06GB 2025-09-19 17:24:59.924589 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 47b1a92e4b1f 16 hours ago 
1.06GB 2025-09-19 17:24:59.924601 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 a1b2f088447a 16 hours ago 1.06GB 2025-09-19 17:24:59.924614 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 1d0aefc5bb6e 16 hours ago 1.05GB 2025-09-19 17:24:59.924625 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 1259920e02c4 16 hours ago 1.05GB 2025-09-19 17:24:59.924636 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 a20f3aaf2468 16 hours ago 1.05GB 2025-09-19 17:24:59.924647 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 76e17c28b726 16 hours ago 1.06GB 2025-09-19 17:24:59.924658 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 3fc356e89cf7 16 hours ago 1.05GB 2025-09-19 17:25:00.113749 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-09-19 17:25:00.113859 | orchestrator | ++ semver latest 5.0.0 2025-09-19 17:25:00.157736 | orchestrator | 2025-09-19 17:25:00.157823 | orchestrator | ## Containers @ testbed-node-2 2025-09-19 17:25:00.157837 | orchestrator | 2025-09-19 17:25:00.157849 | orchestrator | + [[ -1 -eq -1 ]] 2025-09-19 17:25:00.157860 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-19 17:25:00.157871 | orchestrator | + echo 2025-09-19 17:25:00.157882 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-09-19 17:25:00.157894 | orchestrator | + echo 2025-09-19 17:25:00.157905 | orchestrator | + osism container testbed-node-2 ps 2025-09-19 17:25:02.316232 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-09-19 17:25:02.316333 | orchestrator | 0334c0b6005b registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-09-19 17:25:02.316349 | orchestrator | 6866eded9fbf registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-09-19 
17:25:02.316361 | orchestrator | 72700593e4c6 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-09-19 17:25:02.316373 | orchestrator | 403bf7940174 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api
2025-09-19 17:25:02.316384 | orchestrator | e50b1796cf0b registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler
2025-09-19 17:25:02.316396 | orchestrator | 13be6658f8aa registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) glance_api
2025-09-19 17:25:02.316407 | orchestrator | e4809d4fca07 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler
2025-09-19 17:25:02.316418 | orchestrator | b065c59231c9 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api
2025-09-19 17:25:02.316429 | orchestrator | 28d5c9541a27 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter
2025-09-19 17:25:02.316441 | orchestrator | ba4e9289de08 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor
2025-09-19 17:25:02.316452 | orchestrator | 7f934c582caa registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter
2025-09-19 17:25:02.316463 | orchestrator | c95c7a15ca6c registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter
2025-09-19 17:25:02.316475 | orchestrator | 4df0331f109e registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter
2025-09-19 17:25:02.316486 | orchestrator | 1d7115ade273 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor
2025-09-19 17:25:02.316497 | orchestrator | 38f2e02841bb registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api
2025-09-19 17:25:02.316508 | orchestrator | f3df2bffdd08 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 13 minutes (healthy) neutron_server
2025-09-19 17:25:02.316548 | orchestrator | 8c15c80cade5 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_worker
2025-09-19 17:25:02.316560 | orchestrator | 5a389de77034 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_mdns
2025-09-19 17:25:02.316571 | orchestrator | b2680ba238e5 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api
2025-09-19 17:25:02.317274 | orchestrator | 4d56fa8162f7 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) designate_producer
2025-09-19 17:25:02.317327 | orchestrator | e384794cc815 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central
2025-09-19 17:25:02.317339 | orchestrator | 1cf8002797ab registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api
2025-09-19 17:25:02.317351 | orchestrator | 7f0675aada49 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9
2025-09-19 17:25:02.317362 | orchestrator | 0773fb13c76d registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker
2025-09-19 17:25:02.317373 | orchestrator | 7ffcd3c664bf registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_keystone_listener
2025-09-19 17:25:02.317384 | orchestrator | ab7296e798e5 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_api
2025-09-19 17:25:02.317394 | orchestrator | 21faf804d447 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 15 minutes ago Up 15 minutes ceph-mgr-testbed-node-2
2025-09-19 17:25:02.317405 | orchestrator | 9e4fd155d52b registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone
2025-09-19 17:25:02.317416 | orchestrator | 3d05c8133c67 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet
2025-09-19 17:25:02.317427 | orchestrator | 6b5dbeda80c1 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon
2025-09-19 17:25:02.317438 | orchestrator | 5e929df8735c registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh
2025-09-19 17:25:02.317449 | orchestrator | ad28c1729391 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards
2025-09-19 17:25:02.317460 | orchestrator | 9cf6b6ad8a07 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb
2025-09-19 17:25:02.317470 | orchestrator | a480dc5231ff registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch
2025-09-19 17:25:02.317481 | orchestrator | 2ba970a2ad5e registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2
2025-09-19 17:25:02.317492 | orchestrator | 7fa761c57bd9 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived
2025-09-19 17:25:02.317516 | orchestrator | 065f4d9fd1b9 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql
2025-09-19 17:25:02.317528 | orchestrator | 0087f6e65315 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy
2025-09-19 17:25:02.317539 | orchestrator | fc45dc0ff9a8 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd
2025-09-19 17:25:02.317550 | orchestrator | 752b7f76ae01 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db
2025-09-19 17:25:02.317560 | orchestrator | e23a2f835aec registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db
2025-09-19 17:25:02.317571 | orchestrator | eb48b10f3de7 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq
2025-09-19 17:25:02.317592 | orchestrator | c58200f880f5 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller
2025-09-19 17:25:02.317604 | orchestrator | cb6e046541cb registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2
2025-09-19 17:25:02.317615 | orchestrator | c6e1f738ca5f registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd
2025-09-19 17:25:02.317626 | orchestrator | fc123fd15a95 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db
2025-09-19 17:25:02.317636 | orchestrator | d4b9791fb599 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel
2025-09-19 17:25:02.317647 | orchestrator | fb18126b1cdf registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis
2025-09-19 17:25:02.317658 | orchestrator | b2a1783b9ecf registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached
2025-09-19 17:25:02.317668 | orchestrator | 1251ebc19ae2 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-09-19 17:25:02.317679 | orchestrator | fbb7ba655277 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-09-19 17:25:02.317693 | orchestrator | 44e5bc7c4888 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd
2025-09-19 17:25:02.625155 | orchestrator |
2025-09-19 17:25:02.625312 | orchestrator | ## Images @ testbed-node-2
2025-09-19 17:25:02.625337 | orchestrator |
2025-09-19 17:25:02.625355 | orchestrator | + echo
2025-09-19 17:25:02.625374 | orchestrator | + echo '## Images @ testbed-node-2'
2025-09-19 17:25:02.625394 | orchestrator | + echo
2025-09-19 17:25:02.625412 | orchestrator | + osism container testbed-node-2 images
2025-09-19 17:25:04.917988 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-09-19 17:25:04.918146 | orchestrator | registry.osism.tech/osism/ceph-daemon reef e5544776978f 14 hours ago 1.27GB
2025-09-19 17:25:04.918237 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 f1db521913fc 16 hours ago 321MB
2025-09-19 17:25:04.918272 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 8188ac43bfc9 16 hours ago 1.59GB
2025-09-19 17:25:04.918284 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 97c04a33606a 16 hours ago 1.56GB
2025-09-19 17:25:04.918295 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 a49525aaa7c8 16 hours ago 420MB
2025-09-19 17:25:04.918305 | orchestrator | registry.osism.tech/kolla/cron 2024.2 704de7ec9f25 16 hours ago 320MB
2025-09-19 17:25:04.918316 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 01693a8e2538 16 hours ago 377MB
2025-09-19 17:25:04.918327 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 1adafff72696 16 hours ago 631MB
2025-09-19 17:25:04.918337 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 db3bd122416e 16 hours ago 331MB
2025-09-19 17:25:04.918349 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 d6dab43ba5a0 16 hours ago 328MB
2025-09-19 17:25:04.918367 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 3c9521a5ec98 16 hours ago 1.05GB
2025-09-19 17:25:04.918387 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 88da420ad3bb 16 hours ago 748MB
2025-09-19 17:25:04.918407 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 3dc19243d77e 16 hours ago 356MB
2025-09-19 17:25:04.918427 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 bb9bb451bb9e 16 hours ago 412MB
2025-09-19 17:25:04.918446 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 a7082db9abd9 16 hours ago 347MB
2025-09-19 17:25:04.918463 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 67f3232669cb 16 hours ago 353MB
2025-09-19 17:25:04.918475 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 3a7a363e61d4 16 hours ago 360MB
2025-09-19 17:25:04.918485 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 af547c4efd0a 16 hours ago 327MB
2025-09-19 17:25:04.918496 | orchestrator | registry.osism.tech/kolla/redis 2024.2 e8d42a6f6117 16 hours ago 327MB
2025-09-19 17:25:04.918506 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 3962eb463fa6 16 hours ago 364MB
2025-09-19 17:25:04.918517 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 a7e1d4e47ed5 16 hours ago 364MB
2025-09-19 17:25:04.918529 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 ee281b442e34 16 hours ago 593MB
2025-09-19 17:25:04.918542 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 7ad0c090b6ff 16 hours ago 1.21GB
2025-09-19 17:25:04.918554 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 c1cc8c6d6e0b 16 hours ago 949MB
2025-09-19 17:25:04.918566 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 00a30ff3320e 16 hours ago 949MB
2025-09-19 17:25:04.918579 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 87e1722d6fde 16 hours ago 949MB
2025-09-19 17:25:04.918591 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 cea5532286f5 16 hours ago 949MB
2025-09-19 17:25:04.918604 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 b940b00e6d28 16 hours ago 1.11GB
2025-09-19 17:25:04.918616 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 d8d2bdbcdfc8 16 hours ago 1.16GB
2025-09-19 17:25:04.918628 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 c96704a81666 16 hours ago 1.11GB
2025-09-19 17:25:04.918641 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 f879e1c6c1ac 16 hours ago 1.25GB
2025-09-19 17:25:04.918653 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 26c8527840b5 16 hours ago 1.3GB
2025-09-19 17:25:04.918674 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 46560f333102 16 hours ago 1.42GB
2025-09-19 17:25:04.918686 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 0bbf0d830f2c 16 hours ago 1.3GB
2025-09-19 17:25:04.918698 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 5be7024268fb 16 hours ago 1.3GB
2025-09-19 17:25:04.918711 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 e1644a6555a9 16 hours ago 1.2GB
2025-09-19 17:25:04.918742 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 1606d013ebc6 16 hours ago 1.31GB
2025-09-19 17:25:04.918755 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 ffb397cc0bd3 16 hours ago 1.41GB
2025-09-19 17:25:04.918767 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 4bd40dedaba8 16 hours ago 1.41GB
2025-09-19 17:25:04.918780 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 b71b58e951f1 16 hours ago 1.15GB
2025-09-19 17:25:04.918792 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 fa2b88da8bbb 16 hours ago 1.04GB
2025-09-19 17:25:04.918805 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 6f510c22ea81 16 hours ago 1.06GB
2025-09-19 17:25:04.918817 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 31c40f05f58a 16 hours ago 1.06GB
2025-09-19 17:25:04.918829 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 47b1a92e4b1f 16 hours ago 1.06GB
2025-09-19 17:25:04.918842 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 a1b2f088447a 16 hours ago 1.06GB
2025-09-19 17:25:04.918854 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 1d0aefc5bb6e 16 hours ago 1.05GB
2025-09-19 17:25:04.918866 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 1259920e02c4 16 hours ago 1.05GB
2025-09-19 17:25:04.918879 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 a20f3aaf2468 16 hours ago 1.05GB
2025-09-19 17:25:04.918890 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 76e17c28b726 16 hours ago 1.06GB
2025-09-19 17:25:04.918901 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 3fc356e89cf7 16 hours ago 1.05GB
2025-09-19 17:25:05.193422 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2025-09-19 17:25:05.200641 | orchestrator | + set -e
2025-09-19 17:25:05.200679 | orchestrator | + source /opt/manager-vars.sh
2025-09-19 17:25:05.202119 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-19 17:25:05.202142 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-19 17:25:05.202153 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-19 17:25:05.202164 | orchestrator | ++ CEPH_VERSION=reef
2025-09-19 17:25:05.202205 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-19 17:25:05.202218 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-19 17:25:05.202229 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-19 17:25:05.202240 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-19 17:25:05.202251 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-19 17:25:05.202262 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-19 17:25:05.202273 | orchestrator | ++ export ARA=false
2025-09-19 17:25:05.202283 | orchestrator | ++ ARA=false
2025-09-19 17:25:05.202294 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-19 17:25:05.202305 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-19 17:25:05.202315 | orchestrator | ++ export TEMPEST=false
2025-09-19 17:25:05.202326 | orchestrator | ++ TEMPEST=false
2025-09-19 17:25:05.202336 | orchestrator | ++ export IS_ZUUL=true
2025-09-19 17:25:05.202347 | orchestrator | ++ IS_ZUUL=true
2025-09-19 17:25:05.202357 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.107
2025-09-19 17:25:05.202368 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.107
2025-09-19 17:25:05.202379 | orchestrator | ++ export EXTERNAL_API=false
2025-09-19 17:25:05.202389 | orchestrator | ++ EXTERNAL_API=false
2025-09-19 17:25:05.202399 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-19 17:25:05.202410 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-19 17:25:05.202420 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-19 17:25:05.202456 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-19 17:25:05.202467 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-19 17:25:05.202477 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-19 17:25:05.202488 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-19 17:25:05.202499 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2025-09-19 17:25:05.211977 | orchestrator | + set -e
2025-09-19 17:25:05.212018 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-19 17:25:05.212030 | orchestrator | ++ export INTERACTIVE=false
2025-09-19 17:25:05.212040 | orchestrator | ++ INTERACTIVE=false
2025-09-19 17:25:05.212051 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-19 17:25:05.212061 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-19 17:25:05.212276 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-09-19 17:25:05.213364 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-09-19 17:25:05.218544 | orchestrator |
2025-09-19 17:25:05.218584 | orchestrator | # Ceph status
2025-09-19 17:25:05.218597 | orchestrator |
2025-09-19 17:25:05.218609 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-19 17:25:05.218621 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-19 17:25:05.218633 | orchestrator | + echo
2025-09-19 17:25:05.218645 | orchestrator | + echo '# Ceph status'
2025-09-19 17:25:05.218656 | orchestrator | + echo
2025-09-19 17:25:05.218668 | orchestrator | + ceph -s
2025-09-19 17:25:05.807965 | orchestrator | cluster:
2025-09-19 17:25:05.808065 | orchestrator | id: 11111111-1111-1111-1111-111111111111
2025-09-19 17:25:05.808080 | orchestrator | health: HEALTH_OK
2025-09-19 17:25:05.808092 | orchestrator |
2025-09-19 17:25:05.808103 | orchestrator | services:
2025-09-19 17:25:05.808114 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m)
2025-09-19 17:25:05.808126 | orchestrator | mgr: testbed-node-0(active, since 15m), standbys: testbed-node-2, testbed-node-1
2025-09-19 17:25:05.808138 | orchestrator | mds: 1/1 daemons up, 2 standby
2025-09-19 17:25:05.808148 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 24m)
2025-09-19 17:25:05.808160 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones)
2025-09-19 17:25:05.808170 | orchestrator |
2025-09-19 17:25:05.808229 | orchestrator | data:
2025-09-19 17:25:05.808241 | orchestrator | volumes: 1/1 healthy
2025-09-19 17:25:05.808251 | orchestrator | pools: 14 pools, 417 pgs
2025-09-19 17:25:05.808262 | orchestrator | objects: 524 objects, 2.2 GiB
2025-09-19 17:25:05.808273 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail
2025-09-19 17:25:05.808285 | orchestrator | pgs: 417 active+clean
2025-09-19 17:25:05.808295 | orchestrator |
2025-09-19 17:25:05.854965 | orchestrator |
2025-09-19 17:25:05.855036 | orchestrator | # Ceph versions
2025-09-19 17:25:05.855048 | orchestrator |
2025-09-19 17:25:05.855060 | orchestrator | + echo
2025-09-19 17:25:05.855071 | orchestrator | + echo '# Ceph versions'
2025-09-19 17:25:05.855082 | orchestrator | + echo
2025-09-19 17:25:05.855092 | orchestrator | + ceph versions
2025-09-19 17:25:06.441649 | orchestrator | {
2025-09-19 17:25:06.441730 | orchestrator | "mon": {
2025-09-19 17:25:06.441744 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 17:25:06.441756 | orchestrator | },
2025-09-19 17:25:06.441767 | orchestrator | "mgr": {
2025-09-19 17:25:06.441778 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 17:25:06.441789 | orchestrator | },
2025-09-19 17:25:06.441799 | orchestrator | "osd": {
2025-09-19 17:25:06.441810 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6
2025-09-19 17:25:06.441821 | orchestrator | },
2025-09-19 17:25:06.441832 | orchestrator | "mds": {
2025-09-19 17:25:06.441842 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 17:25:06.441853 | orchestrator | },
2025-09-19 17:25:06.441864 | orchestrator | "rgw": {
2025-09-19 17:25:06.441875 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3
2025-09-19 17:25:06.441886 | orchestrator | },
2025-09-19 17:25:06.441896 | orchestrator | "overall": {
2025-09-19 17:25:06.441907 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18
2025-09-19 17:25:06.441918 | orchestrator | }
2025-09-19 17:25:06.441929 | orchestrator | }
2025-09-19 17:25:06.495876 | orchestrator |
2025-09-19 17:25:06.495956 | orchestrator | # Ceph OSD tree
2025-09-19 17:25:06.495970 | orchestrator |
2025-09-19 17:25:06.495982 | orchestrator | + echo
2025-09-19 17:25:06.496017 | orchestrator | + echo '# Ceph OSD tree'
2025-09-19 17:25:06.496029 | orchestrator | + echo
2025-09-19 17:25:06.496039 | orchestrator | + ceph osd df tree
2025-09-19 17:25:07.015310 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
2025-09-19 17:25:07.015421 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 425 MiB 113 GiB 5.91 1.00 - root default
2025-09-19 17:25:07.015446 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3
2025-09-19 17:25:07.015464 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 1 KiB 74 MiB 18 GiB 7.77 1.31 207 up osd.2
2025-09-19 17:25:07.015481 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 832 MiB 763 MiB 1 KiB 70 MiB 19 GiB 4.07 0.69 201 up osd.4
2025-09-19 17:25:07.015499 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-4
2025-09-19 17:25:07.015521 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.50 1.10 183 up osd.0
2025-09-19 17:25:07.015540 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1019 MiB 1 KiB 70 MiB 19 GiB 5.32 0.90 221 up osd.3
2025-09-19 17:25:07.015565 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5
2025-09-19 17:25:07.015577 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 70 MiB 19 GiB 6.53 1.10 222 up osd.1
2025-09-19 17:25:07.015588 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1011 MiB 1 KiB 74 MiB 19 GiB 5.30 0.90 184 up osd.5
2025-09-19 17:25:07.015598 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 425 MiB 113 GiB 5.91
2025-09-19 17:25:07.015609 | orchestrator | MIN/MAX VAR: 0.69/1.31 STDDEV: 1.18
2025-09-19 17:25:07.059067 | orchestrator |
2025-09-19 17:25:07.059135 | orchestrator | # Ceph monitor status
2025-09-19 17:25:07.059148 | orchestrator |
2025-09-19 17:25:07.059159 | orchestrator | + echo
2025-09-19 17:25:07.059171 | orchestrator | + echo '# Ceph monitor status'
2025-09-19 17:25:07.059205 | orchestrator | + echo
2025-09-19 17:25:07.059217 | orchestrator | + ceph mon stat
2025-09-19 17:25:07.656322 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-09-19 17:25:07.694919 | orchestrator |
2025-09-19 17:25:07.695000 | orchestrator | # Ceph quorum status
2025-09-19 17:25:07.695013 | orchestrator |
2025-09-19 17:25:07.695021 | orchestrator | + echo
2025-09-19 17:25:07.695031 | orchestrator | + echo '# Ceph quorum status'
2025-09-19 17:25:07.695041 | orchestrator | + echo
2025-09-19 17:25:07.695703 | orchestrator | + ceph quorum_status
2025-09-19 17:25:07.695736 | orchestrator | + jq
2025-09-19 17:25:08.381667 | orchestrator | {
2025-09-19 17:25:08.381741 | orchestrator | "election_epoch": 8,
2025-09-19 17:25:08.381753 | orchestrator | "quorum": [
2025-09-19 17:25:08.381762 | orchestrator | 0,
2025-09-19 17:25:08.381771 | orchestrator | 1,
2025-09-19 17:25:08.381779 | orchestrator | 2
2025-09-19 17:25:08.381788 | orchestrator | ],
2025-09-19 17:25:08.381796 | orchestrator | "quorum_names": [
2025-09-19 17:25:08.381805 | orchestrator | "testbed-node-0",
2025-09-19 17:25:08.381814 | orchestrator | "testbed-node-1",
2025-09-19 17:25:08.381822 | orchestrator | "testbed-node-2"
2025-09-19 17:25:08.381831 | orchestrator | ],
2025-09-19 17:25:08.381839 | orchestrator | "quorum_leader_name": "testbed-node-0",
2025-09-19 17:25:08.381849 | orchestrator | "quorum_age": 1686,
2025-09-19 17:25:08.381857 | orchestrator | "features": {
2025-09-19 17:25:08.381866 | orchestrator | "quorum_con": "4540138322906710015",
2025-09-19 17:25:08.382088 | orchestrator | "quorum_mon": [
2025-09-19 17:25:08.382097 | orchestrator | "kraken",
2025-09-19 17:25:08.382105 | orchestrator | "luminous",
2025-09-19 17:25:08.382114 | orchestrator | "mimic",
2025-09-19 17:25:08.382122 | orchestrator | "osdmap-prune",
2025-09-19 17:25:08.382148 | orchestrator | "nautilus",
2025-09-19 17:25:08.382157 | orchestrator | "octopus",
2025-09-19 17:25:08.382166 | orchestrator | "pacific",
2025-09-19 17:25:08.382236 | orchestrator | "elector-pinging",
2025-09-19 17:25:08.382247 | orchestrator | "quincy",
2025-09-19 17:25:08.382256 | orchestrator | "reef"
2025-09-19 17:25:08.382264 | orchestrator | ]
2025-09-19 17:25:08.382273 | orchestrator | },
2025-09-19 17:25:08.382282 | orchestrator | "monmap": {
2025-09-19 17:25:08.382290 | orchestrator | "epoch": 1,
2025-09-19 17:25:08.382299 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111",
2025-09-19 17:25:08.382308 | orchestrator | "modified": "2025-09-19T16:56:43.839080Z",
2025-09-19 17:25:08.382316 | orchestrator | "created": "2025-09-19T16:56:43.839080Z",
2025-09-19 17:25:08.382325 | orchestrator | "min_mon_release": 18,
2025-09-19 17:25:08.382333 | orchestrator | "min_mon_release_name": "reef",
2025-09-19 17:25:08.382342 | orchestrator | "election_strategy": 1,
2025-09-19 17:25:08.382350 | orchestrator | "disallowed_leaders: ": "",
2025-09-19 17:25:08.382359 | orchestrator | "stretch_mode": false,
2025-09-19 17:25:08.382367 | orchestrator | "tiebreaker_mon": "",
2025-09-19 17:25:08.382375 | orchestrator | "removed_ranks: ": "",
2025-09-19 17:25:08.382384 | orchestrator | "features": {
2025-09-19 17:25:08.382392 | orchestrator | "persistent": [
2025-09-19 17:25:08.382401 | orchestrator | "kraken",
2025-09-19 17:25:08.382409 | orchestrator | "luminous",
2025-09-19 17:25:08.382418 | orchestrator | "mimic",
2025-09-19 17:25:08.382426 | orchestrator | "osdmap-prune",
2025-09-19 17:25:08.382435 | orchestrator | "nautilus",
2025-09-19 17:25:08.382443 | orchestrator | "octopus",
2025-09-19 17:25:08.382452 | orchestrator | "pacific",
2025-09-19 17:25:08.382460 | orchestrator | "elector-pinging",
2025-09-19 17:25:08.382469 | orchestrator | "quincy",
2025-09-19 17:25:08.382477 | orchestrator | "reef"
2025-09-19 17:25:08.382486 | orchestrator | ],
2025-09-19 17:25:08.382494 | orchestrator | "optional": []
2025-09-19 17:25:08.382503 | orchestrator | },
2025-09-19 17:25:08.382511 | orchestrator | "mons": [
2025-09-19 17:25:08.382520 | orchestrator | {
2025-09-19 17:25:08.382528 | orchestrator | "rank": 0,
2025-09-19 17:25:08.382537 | orchestrator | "name": "testbed-node-0",
2025-09-19 17:25:08.382545 | orchestrator | "public_addrs": {
2025-09-19 17:25:08.382554 | orchestrator | "addrvec": [
2025-09-19 17:25:08.382562 | orchestrator | {
2025-09-19 17:25:08.382571 | orchestrator | "type": "v2",
2025-09-19 17:25:08.382579 | orchestrator | "addr": "192.168.16.10:3300",
2025-09-19 17:25:08.382588 | orchestrator | "nonce": 0
2025-09-19 17:25:08.382596 | orchestrator | },
2025-09-19 17:25:08.382605 | orchestrator | {
2025-09-19 17:25:08.382613 | orchestrator | "type": "v1",
2025-09-19 17:25:08.382622 | orchestrator | "addr": "192.168.16.10:6789",
2025-09-19 17:25:08.382630 | orchestrator | "nonce": 0
2025-09-19 17:25:08.382639 | orchestrator | }
2025-09-19 17:25:08.382648 | orchestrator | ]
2025-09-19 17:25:08.382656 | orchestrator | },
2025-09-19 17:25:08.382665 | orchestrator | "addr": "192.168.16.10:6789/0",
2025-09-19 17:25:08.382675 | orchestrator | "public_addr": "192.168.16.10:6789/0",
2025-09-19 17:25:08.382684 | orchestrator | "priority": 0,
2025-09-19 17:25:08.382694 | orchestrator | "weight": 0,
2025-09-19 17:25:08.382704 | orchestrator | "crush_location": "{}"
2025-09-19 17:25:08.382714 | orchestrator | },
2025-09-19 17:25:08.382725 | orchestrator | {
2025-09-19 17:25:08.382734 | orchestrator | "rank": 1,
2025-09-19 17:25:08.382744 | orchestrator | "name": "testbed-node-1",
2025-09-19 17:25:08.382754 | orchestrator | "public_addrs": {
2025-09-19 17:25:08.382764 | orchestrator | "addrvec": [
2025-09-19 17:25:08.382774 | orchestrator | {
2025-09-19 17:25:08.382783 | orchestrator | "type": "v2",
2025-09-19 17:25:08.382793 | orchestrator | "addr": "192.168.16.11:3300",
2025-09-19 17:25:08.382802 | orchestrator | "nonce": 0
2025-09-19 17:25:08.382812 | orchestrator | },
2025-09-19 17:25:08.382822 | orchestrator | {
2025-09-19 17:25:08.382832 | orchestrator | "type": "v1",
2025-09-19 17:25:08.382841 | orchestrator | "addr": "192.168.16.11:6789",
2025-09-19 17:25:08.382850 | orchestrator | "nonce": 0
2025-09-19 17:25:08.382861 | orchestrator | }
2025-09-19 17:25:08.382871 | orchestrator | ]
2025-09-19 17:25:08.382881 | orchestrator | },
2025-09-19 17:25:08.382890 | orchestrator | "addr": "192.168.16.11:6789/0",
2025-09-19 17:25:08.382900 | orchestrator | "public_addr": "192.168.16.11:6789/0",
2025-09-19 17:25:08.382916 | orchestrator | "priority": 0,
2025-09-19 17:25:08.382927 | orchestrator | "weight": 0,
2025-09-19 17:25:08.382937 | orchestrator | "crush_location": "{}"
2025-09-19 17:25:08.382946 | orchestrator | },
2025-09-19 17:25:08.382955 | orchestrator | {
2025-09-19 17:25:08.382963 | orchestrator | "rank": 2,
2025-09-19 17:25:08.382972 | orchestrator | "name": "testbed-node-2",
2025-09-19 17:25:08.382981 | orchestrator | "public_addrs": {
2025-09-19 17:25:08.382989 | orchestrator | "addrvec": [
2025-09-19 17:25:08.382998 | orchestrator | {
2025-09-19 17:25:08.383006 | orchestrator | "type": "v2",
2025-09-19 17:25:08.383015 | orchestrator | "addr": "192.168.16.12:3300",
2025-09-19 17:25:08.383023 | orchestrator | "nonce": 0
2025-09-19 17:25:08.383031 | orchestrator | },
2025-09-19 17:25:08.383040 | orchestrator | {
2025-09-19 17:25:08.383048 | orchestrator | "type": "v1",
2025-09-19 17:25:08.383057 | orchestrator | "addr": "192.168.16.12:6789",
2025-09-19 17:25:08.383065 | orchestrator | "nonce": 0
2025-09-19 17:25:08.383074 | orchestrator | }
2025-09-19 17:25:08.383082 | orchestrator | ]
2025-09-19 17:25:08.383091 | orchestrator | },
2025-09-19 17:25:08.383099 | orchestrator | "addr": "192.168.16.12:6789/0",
2025-09-19 17:25:08.383108 | orchestrator | "public_addr": "192.168.16.12:6789/0",
2025-09-19 17:25:08.383116 | orchestrator | "priority": 0,
2025-09-19 17:25:08.383125 | orchestrator | "weight": 0,
2025-09-19 17:25:08.383133 | orchestrator | "crush_location": "{}"
2025-09-19 17:25:08.383142 | orchestrator | }
2025-09-19 17:25:08.383150 | orchestrator | ]
2025-09-19 17:25:08.383159 | orchestrator | }
2025-09-19 17:25:08.383167 | orchestrator | }
2025-09-19 17:25:08.383199 | orchestrator |
2025-09-19 17:25:08.383209 | orchestrator | # Ceph free space status
2025-09-19 17:25:08.383217 | orchestrator |
2025-09-19 17:25:08.383226 | orchestrator | + echo
2025-09-19 17:25:08.383234 | orchestrator | + echo '# Ceph free space status'
2025-09-19 17:25:08.383243 | orchestrator | + echo
2025-09-19 17:25:08.383252 | orchestrator | + ceph df
2025-09-19 17:25:08.952772 | orchestrator | --- RAW STORAGE ---
2025-09-19 17:25:08.952851 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-09-19 17:25:08.952875 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91
2025-09-19 17:25:08.952886 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.91
2025-09-19 17:25:08.952897 | orchestrator |
2025-09-19 17:25:08.952908 | orchestrator | --- POOLS ---
2025-09-19 17:25:08.952920 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-09-19 17:25:08.952931 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB
2025-09-19 17:25:08.952942 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-09-19 17:25:08.952952 | orchestrator | cephfs_metadata 3 32 4.4 KiB 22 96 KiB 0 35 GiB
2025-09-19 17:25:08.952963 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-09-19 17:25:08.952974 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-09-19 17:25:08.952985 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-09-19 17:25:08.952995 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB
2025-09-19 17:25:08.953006 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-09-19 17:25:08.953017 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB
2025-09-19 17:25:08.953027 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-09-19 17:25:08.953050 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-09-19 17:25:08.953061 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB
2025-09-19 17:25:08.953072 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-09-19 17:25:08.953083 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-09-19 17:25:08.998505 | orchestrator | ++ semver latest 5.0.0
2025-09-19 17:25:09.061104 | orchestrator | + [[ -1 -eq -1 ]]
2025-09-19 17:25:09.061158 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-19 17:25:09.061167 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-09-19 17:25:09.061214 | orchestrator | + osism apply facts
2025-09-19 17:25:21.055334 | orchestrator | 2025-09-19 17:25:21 | INFO  | Task 8f08012b-bf02-497c-bce4-e3aa06942050 (facts) was prepared for execution.
2025-09-19 17:25:21.055429 | orchestrator | 2025-09-19 17:25:21 | INFO  | It takes a moment until task 8f08012b-bf02-497c-bce4-e3aa06942050 (facts) has been started and output is visible here.
2025-09-19 17:25:34.122333 | orchestrator |
2025-09-19 17:25:34.122444 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-19 17:25:34.122459 | orchestrator |
2025-09-19 17:25:34.122471 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-19 17:25:34.122483 | orchestrator | Friday 19 September 2025 17:25:25 +0000 (0:00:00.270) 0:00:00.270 ******
2025-09-19 17:25:34.122494 | orchestrator | ok: [testbed-manager]
2025-09-19 17:25:34.122506 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:25:34.122517 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:25:34.122528 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:25:34.122538 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:25:34.122549 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:25:34.122559 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:25:34.122570 | orchestrator |
2025-09-19 17:25:34.122581 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-19 17:25:34.122592 | orchestrator | Friday 19 September 2025 17:25:26 +0000 (0:00:01.511) 0:00:01.782 ******
2025-09-19 17:25:34.122603 | orchestrator | skipping: [testbed-manager]
2025-09-19 17:25:34.122614 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:25:34.122624 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:25:34.122635 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:25:34.122646 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:25:34.122656 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:25:34.122667 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:25:34.122678 | orchestrator |
2025-09-19 17:25:34.122688 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-19 17:25:34.122699 | orchestrator |
2025-09-19 17:25:34.122710 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-19 17:25:34.122720 | orchestrator | Friday 19 September 2025 17:25:27 +0000 (0:00:01.245) 0:00:03.027 ******
2025-09-19 17:25:34.122731 | orchestrator | ok: [testbed-node-1]
2025-09-19 17:25:34.122742 | orchestrator | ok: [testbed-node-2]
2025-09-19 17:25:34.122752 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:25:34.122763 | orchestrator | ok: [testbed-manager]
2025-09-19 17:25:34.122773 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:25:34.122784 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:25:34.122794 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:25:34.122805 | orchestrator |
2025-09-19 17:25:34.122816 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-19 17:25:34.122827 | orchestrator |
2025-09-19 17:25:34.122837 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-19 17:25:34.122848 | orchestrator | Friday 19 September 2025 17:25:33 +0000 (0:00:05.244) 0:00:08.272 ******
2025-09-19 17:25:34.122859 | orchestrator | skipping: [testbed-manager]
2025-09-19 17:25:34.122870 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:25:34.122882 | orchestrator | skipping: [testbed-node-1]
2025-09-19 17:25:34.122894 | orchestrator | skipping: [testbed-node-2]
2025-09-19 17:25:34.122907 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:25:34.122919 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:25:34.122932 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:25:34.122944 | orchestrator |
2025-09-19 17:25:34.122957 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 17:25:34.122970 | orchestrator | testbed-manager : ok=2  changed=0
unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 17:25:34.122983 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 17:25:34.123013 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 17:25:34.123063 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 17:25:34.123083 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 17:25:34.123101 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 17:25:34.123119 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 17:25:34.123139 | orchestrator | 2025-09-19 17:25:34.123157 | orchestrator | 2025-09-19 17:25:34.123197 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:25:34.123210 | orchestrator | Friday 19 September 2025 17:25:33 +0000 (0:00:00.543) 0:00:08.815 ****** 2025-09-19 17:25:34.123221 | orchestrator | =============================================================================== 2025-09-19 17:25:34.123232 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.24s 2025-09-19 17:25:34.123242 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.51s 2025-09-19 17:25:34.123252 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.25s 2025-09-19 17:25:34.123263 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-09-19 17:25:34.412364 | orchestrator | + osism validate ceph-mons 2025-09-19 17:26:06.108062 | orchestrator | 2025-09-19 17:26:06.108160 | orchestrator | PLAY [Ceph validate mons] 
****************************************************** 2025-09-19 17:26:06.108220 | orchestrator | 2025-09-19 17:26:06.108233 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-19 17:26:06.108244 | orchestrator | Friday 19 September 2025 17:25:50 +0000 (0:00:00.433) 0:00:00.433 ****** 2025-09-19 17:26:06.108254 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 17:26:06.108264 | orchestrator | 2025-09-19 17:26:06.108274 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-19 17:26:06.108284 | orchestrator | Friday 19 September 2025 17:25:51 +0000 (0:00:00.658) 0:00:01.091 ****** 2025-09-19 17:26:06.108294 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 17:26:06.108303 | orchestrator | 2025-09-19 17:26:06.108313 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-19 17:26:06.108322 | orchestrator | Friday 19 September 2025 17:25:52 +0000 (0:00:00.837) 0:00:01.929 ****** 2025-09-19 17:26:06.108332 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:06.108343 | orchestrator | 2025-09-19 17:26:06.108352 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-19 17:26:06.108362 | orchestrator | Friday 19 September 2025 17:25:52 +0000 (0:00:00.252) 0:00:02.181 ****** 2025-09-19 17:26:06.108372 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:06.108382 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:26:06.108392 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:26:06.108402 | orchestrator | 2025-09-19 17:26:06.108412 | orchestrator | TASK [Get container info] ****************************************************** 2025-09-19 17:26:06.108422 | orchestrator | Friday 19 September 2025 17:25:52 +0000 (0:00:00.273) 0:00:02.455 ****** 2025-09-19 17:26:06.108431 | orchestrator | ok: 
[testbed-node-0] 2025-09-19 17:26:06.108441 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:26:06.108450 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:26:06.108460 | orchestrator | 2025-09-19 17:26:06.108470 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-19 17:26:06.108479 | orchestrator | Friday 19 September 2025 17:25:53 +0000 (0:00:01.007) 0:00:03.462 ****** 2025-09-19 17:26:06.108489 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:06.108521 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:26:06.108532 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:26:06.108541 | orchestrator | 2025-09-19 17:26:06.108550 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-19 17:26:06.108560 | orchestrator | Friday 19 September 2025 17:25:54 +0000 (0:00:00.279) 0:00:03.742 ****** 2025-09-19 17:26:06.108570 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:06.108579 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:26:06.108589 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:26:06.108598 | orchestrator | 2025-09-19 17:26:06.108609 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 17:26:06.108620 | orchestrator | Friday 19 September 2025 17:25:54 +0000 (0:00:00.447) 0:00:04.190 ****** 2025-09-19 17:26:06.108631 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:06.108642 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:26:06.108654 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:26:06.108665 | orchestrator | 2025-09-19 17:26:06.108676 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-09-19 17:26:06.108688 | orchestrator | Friday 19 September 2025 17:25:54 +0000 (0:00:00.287) 0:00:04.478 ****** 2025-09-19 17:26:06.108698 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:06.108709 | 
orchestrator | skipping: [testbed-node-1] 2025-09-19 17:26:06.108720 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:26:06.108731 | orchestrator | 2025-09-19 17:26:06.108742 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-09-19 17:26:06.108754 | orchestrator | Friday 19 September 2025 17:25:55 +0000 (0:00:00.279) 0:00:04.757 ****** 2025-09-19 17:26:06.108764 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:06.108776 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:26:06.108786 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:26:06.108797 | orchestrator | 2025-09-19 17:26:06.108808 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 17:26:06.108819 | orchestrator | Friday 19 September 2025 17:25:55 +0000 (0:00:00.287) 0:00:05.044 ****** 2025-09-19 17:26:06.108830 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:06.108841 | orchestrator | 2025-09-19 17:26:06.108852 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 17:26:06.108863 | orchestrator | Friday 19 September 2025 17:25:55 +0000 (0:00:00.246) 0:00:05.291 ****** 2025-09-19 17:26:06.108874 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:06.108885 | orchestrator | 2025-09-19 17:26:06.108896 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 17:26:06.108907 | orchestrator | Friday 19 September 2025 17:25:56 +0000 (0:00:00.449) 0:00:05.741 ****** 2025-09-19 17:26:06.108918 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:06.108929 | orchestrator | 2025-09-19 17:26:06.108940 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 17:26:06.108951 | orchestrator | Friday 19 September 2025 17:25:56 +0000 (0:00:00.609) 0:00:06.350 ****** 2025-09-19 17:26:06.108963 | orchestrator | 
2025-09-19 17:26:06.108974 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 17:26:06.108984 | orchestrator | Friday 19 September 2025 17:25:56 +0000 (0:00:00.066) 0:00:06.416 ****** 2025-09-19 17:26:06.108994 | orchestrator | 2025-09-19 17:26:06.109003 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 17:26:06.109013 | orchestrator | Friday 19 September 2025 17:25:56 +0000 (0:00:00.065) 0:00:06.482 ****** 2025-09-19 17:26:06.109022 | orchestrator | 2025-09-19 17:26:06.109032 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 17:26:06.109041 | orchestrator | Friday 19 September 2025 17:25:56 +0000 (0:00:00.069) 0:00:06.552 ****** 2025-09-19 17:26:06.109051 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:06.109060 | orchestrator | 2025-09-19 17:26:06.109070 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-09-19 17:26:06.109080 | orchestrator | Friday 19 September 2025 17:25:57 +0000 (0:00:00.252) 0:00:06.805 ****** 2025-09-19 17:26:06.109098 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:06.109107 | orchestrator | 2025-09-19 17:26:06.109133 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-09-19 17:26:06.109159 | orchestrator | Friday 19 September 2025 17:25:57 +0000 (0:00:00.238) 0:00:07.043 ****** 2025-09-19 17:26:06.109169 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:06.109196 | orchestrator | 2025-09-19 17:26:06.109206 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-09-19 17:26:06.109216 | orchestrator | Friday 19 September 2025 17:25:57 +0000 (0:00:00.102) 0:00:07.146 ****** 2025-09-19 17:26:06.109225 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:26:06.109235 | orchestrator | 
2025-09-19 17:26:06.109244 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-09-19 17:26:06.109254 | orchestrator | Friday 19 September 2025 17:25:58 +0000 (0:00:01.517) 0:00:08.663 ****** 2025-09-19 17:26:06.109263 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:06.109273 | orchestrator | 2025-09-19 17:26:06.109282 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-09-19 17:26:06.109292 | orchestrator | Friday 19 September 2025 17:25:59 +0000 (0:00:00.310) 0:00:08.974 ****** 2025-09-19 17:26:06.109301 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:06.109311 | orchestrator | 2025-09-19 17:26:06.109320 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-09-19 17:26:06.109330 | orchestrator | Friday 19 September 2025 17:25:59 +0000 (0:00:00.133) 0:00:09.108 ****** 2025-09-19 17:26:06.109339 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:06.109348 | orchestrator | 2025-09-19 17:26:06.109358 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-09-19 17:26:06.109368 | orchestrator | Friday 19 September 2025 17:25:59 +0000 (0:00:00.309) 0:00:09.418 ****** 2025-09-19 17:26:06.109377 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:06.109387 | orchestrator | 2025-09-19 17:26:06.109396 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-09-19 17:26:06.109406 | orchestrator | Friday 19 September 2025 17:26:00 +0000 (0:00:00.471) 0:00:09.890 ****** 2025-09-19 17:26:06.109415 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:06.109425 | orchestrator | 2025-09-19 17:26:06.109434 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-09-19 17:26:06.109443 | orchestrator | Friday 19 September 2025 17:26:00 +0000 (0:00:00.105) 
0:00:09.996 ****** 2025-09-19 17:26:06.109453 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:06.109462 | orchestrator | 2025-09-19 17:26:06.109535 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-09-19 17:26:06.109586 | orchestrator | Friday 19 September 2025 17:26:00 +0000 (0:00:00.133) 0:00:10.129 ****** 2025-09-19 17:26:06.109598 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:06.109608 | orchestrator | 2025-09-19 17:26:06.109617 | orchestrator | TASK [Gather status data] ****************************************************** 2025-09-19 17:26:06.109627 | orchestrator | Friday 19 September 2025 17:26:00 +0000 (0:00:00.124) 0:00:10.253 ****** 2025-09-19 17:26:06.109636 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:26:06.109646 | orchestrator | 2025-09-19 17:26:06.109655 | orchestrator | TASK [Set health test data] **************************************************** 2025-09-19 17:26:06.109665 | orchestrator | Friday 19 September 2025 17:26:01 +0000 (0:00:01.388) 0:00:11.642 ****** 2025-09-19 17:26:06.109674 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:06.109684 | orchestrator | 2025-09-19 17:26:06.109693 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-09-19 17:26:06.109703 | orchestrator | Friday 19 September 2025 17:26:02 +0000 (0:00:00.304) 0:00:11.946 ****** 2025-09-19 17:26:06.109712 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:06.109721 | orchestrator | 2025-09-19 17:26:06.109731 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-09-19 17:26:06.109740 | orchestrator | Friday 19 September 2025 17:26:02 +0000 (0:00:00.145) 0:00:12.092 ****** 2025-09-19 17:26:06.109758 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:06.109768 | orchestrator | 2025-09-19 17:26:06.109777 | orchestrator | TASK [Fail cluster-health if health is not acceptable 
(strict)] **************** 2025-09-19 17:26:06.109787 | orchestrator | Friday 19 September 2025 17:26:02 +0000 (0:00:00.149) 0:00:12.241 ****** 2025-09-19 17:26:06.109796 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:06.109805 | orchestrator | 2025-09-19 17:26:06.109815 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-09-19 17:26:06.109830 | orchestrator | Friday 19 September 2025 17:26:02 +0000 (0:00:00.141) 0:00:12.383 ****** 2025-09-19 17:26:06.109840 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:06.109850 | orchestrator | 2025-09-19 17:26:06.109859 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-19 17:26:06.109869 | orchestrator | Friday 19 September 2025 17:26:02 +0000 (0:00:00.134) 0:00:12.517 ****** 2025-09-19 17:26:06.109878 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 17:26:06.109888 | orchestrator | 2025-09-19 17:26:06.109897 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-19 17:26:06.109907 | orchestrator | Friday 19 September 2025 17:26:03 +0000 (0:00:00.264) 0:00:12.782 ****** 2025-09-19 17:26:06.109916 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:06.109926 | orchestrator | 2025-09-19 17:26:06.109935 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 17:26:06.109945 | orchestrator | Friday 19 September 2025 17:26:03 +0000 (0:00:00.632) 0:00:13.414 ****** 2025-09-19 17:26:06.109954 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 17:26:06.109964 | orchestrator | 2025-09-19 17:26:06.109973 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 17:26:06.109983 | orchestrator | Friday 19 September 2025 17:26:05 +0000 (0:00:01.585) 0:00:15.000 ****** 2025-09-19 
17:26:06.109993 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 17:26:06.110002 | orchestrator | 2025-09-19 17:26:06.110011 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 17:26:06.110072 | orchestrator | Friday 19 September 2025 17:26:05 +0000 (0:00:00.278) 0:00:15.279 ****** 2025-09-19 17:26:06.110083 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 17:26:06.110092 | orchestrator | 2025-09-19 17:26:06.110112 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 17:26:08.181744 | orchestrator | Friday 19 September 2025 17:26:05 +0000 (0:00:00.275) 0:00:15.554 ****** 2025-09-19 17:26:08.181846 | orchestrator | 2025-09-19 17:26:08.181861 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 17:26:08.181873 | orchestrator | Friday 19 September 2025 17:26:05 +0000 (0:00:00.062) 0:00:15.617 ****** 2025-09-19 17:26:08.181884 | orchestrator | 2025-09-19 17:26:08.181895 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 17:26:08.181906 | orchestrator | Friday 19 September 2025 17:26:06 +0000 (0:00:00.069) 0:00:15.686 ****** 2025-09-19 17:26:08.181916 | orchestrator | 2025-09-19 17:26:08.181927 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-19 17:26:08.181937 | orchestrator | Friday 19 September 2025 17:26:06 +0000 (0:00:00.075) 0:00:15.762 ****** 2025-09-19 17:26:08.181948 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 17:26:08.181959 | orchestrator | 2025-09-19 17:26:08.181970 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 17:26:08.181981 | orchestrator | Friday 19 September 2025 17:26:07 +0000 (0:00:01.281) 
0:00:17.044 ****** 2025-09-19 17:26:08.181991 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-09-19 17:26:08.182002 | orchestrator |  "msg": [ 2025-09-19 17:26:08.182013 | orchestrator |  "Validator run completed.", 2025-09-19 17:26:08.182132 | orchestrator |  "You can find the report file here:", 2025-09-19 17:26:08.182168 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-09-19T17:25:51+00:00-report.json", 2025-09-19 17:26:08.182211 | orchestrator |  "on the following host:", 2025-09-19 17:26:08.182224 | orchestrator |  "testbed-manager" 2025-09-19 17:26:08.182235 | orchestrator |  ] 2025-09-19 17:26:08.182246 | orchestrator | } 2025-09-19 17:26:08.182257 | orchestrator | 2025-09-19 17:26:08.182268 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:26:08.182284 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-19 17:26:08.182299 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 17:26:08.182312 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 17:26:08.182325 | orchestrator | 2025-09-19 17:26:08.182338 | orchestrator | 2025-09-19 17:26:08.182350 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:26:08.182362 | orchestrator | Friday 19 September 2025 17:26:07 +0000 (0:00:00.407) 0:00:17.451 ****** 2025-09-19 17:26:08.182374 | orchestrator | =============================================================================== 2025-09-19 17:26:08.182386 | orchestrator | Aggregate test results step one ----------------------------------------- 1.59s 2025-09-19 17:26:08.182398 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.52s 2025-09-19 17:26:08.182411 | orchestrator | 
Gather status data ------------------------------------------------------ 1.39s 2025-09-19 17:26:08.182423 | orchestrator | Write report file ------------------------------------------------------- 1.28s 2025-09-19 17:26:08.182435 | orchestrator | Get container info ------------------------------------------------------ 1.01s 2025-09-19 17:26:08.182447 | orchestrator | Create report output directory ------------------------------------------ 0.84s 2025-09-19 17:26:08.182459 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s 2025-09-19 17:26:08.182471 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.63s 2025-09-19 17:26:08.182483 | orchestrator | Aggregate test results step three --------------------------------------- 0.61s 2025-09-19 17:26:08.182496 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.47s 2025-09-19 17:26:08.182508 | orchestrator | Aggregate test results step two ----------------------------------------- 0.45s 2025-09-19 17:26:08.182521 | orchestrator | Set test result to passed if container is existing ---------------------- 0.45s 2025-09-19 17:26:08.182533 | orchestrator | Print report file information ------------------------------------------- 0.41s 2025-09-19 17:26:08.182545 | orchestrator | Set quorum test data ---------------------------------------------------- 0.31s 2025-09-19 17:26:08.182557 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.31s 2025-09-19 17:26:08.182570 | orchestrator | Set health test data ---------------------------------------------------- 0.30s 2025-09-19 17:26:08.182581 | orchestrator | Prepare test data ------------------------------------------------------- 0.29s 2025-09-19 17:26:08.182593 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.29s 2025-09-19 17:26:08.182606 | orchestrator | Set test result 
to failed if container is missing ----------------------- 0.28s 2025-09-19 17:26:08.182619 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.28s 2025-09-19 17:26:08.466344 | orchestrator | + osism validate ceph-mgrs 2025-09-19 17:26:39.329501 | orchestrator | 2025-09-19 17:26:39.329595 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-09-19 17:26:39.329608 | orchestrator | 2025-09-19 17:26:39.329618 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-09-19 17:26:39.329627 | orchestrator | Friday 19 September 2025 17:26:24 +0000 (0:00:00.425) 0:00:00.425 ****** 2025-09-19 17:26:39.329636 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 17:26:39.329665 | orchestrator | 2025-09-19 17:26:39.329675 | orchestrator | TASK [Create report output directory] ****************************************** 2025-09-19 17:26:39.329683 | orchestrator | Friday 19 September 2025 17:26:25 +0000 (0:00:00.677) 0:00:01.103 ****** 2025-09-19 17:26:39.329692 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-09-19 17:26:39.329700 | orchestrator | 2025-09-19 17:26:39.329709 | orchestrator | TASK [Define report vars] ****************************************************** 2025-09-19 17:26:39.329717 | orchestrator | Friday 19 September 2025 17:26:26 +0000 (0:00:00.825) 0:00:01.928 ****** 2025-09-19 17:26:39.329726 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:39.329735 | orchestrator | 2025-09-19 17:26:39.329744 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-09-19 17:26:39.329752 | orchestrator | Friday 19 September 2025 17:26:26 +0000 (0:00:00.253) 0:00:02.182 ****** 2025-09-19 17:26:39.329761 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:39.329770 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:26:39.329778 | 
orchestrator | ok: [testbed-node-2] 2025-09-19 17:26:39.329787 | orchestrator | 2025-09-19 17:26:39.329795 | orchestrator | TASK [Get container info] ****************************************************** 2025-09-19 17:26:39.329804 | orchestrator | Friday 19 September 2025 17:26:26 +0000 (0:00:00.287) 0:00:02.470 ****** 2025-09-19 17:26:39.329812 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:39.329821 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:26:39.329829 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:26:39.329837 | orchestrator | 2025-09-19 17:26:39.329846 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-09-19 17:26:39.329855 | orchestrator | Friday 19 September 2025 17:26:27 +0000 (0:00:00.974) 0:00:03.444 ****** 2025-09-19 17:26:39.329863 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:39.329872 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:26:39.329880 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:26:39.329889 | orchestrator | 2025-09-19 17:26:39.329897 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-09-19 17:26:39.329906 | orchestrator | Friday 19 September 2025 17:26:28 +0000 (0:00:00.280) 0:00:03.725 ****** 2025-09-19 17:26:39.329914 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:39.329923 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:26:39.329931 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:26:39.329940 | orchestrator | 2025-09-19 17:26:39.329948 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 17:26:39.329972 | orchestrator | Friday 19 September 2025 17:26:28 +0000 (0:00:00.471) 0:00:04.197 ****** 2025-09-19 17:26:39.329981 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:39.329989 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:26:39.329998 | orchestrator | ok: [testbed-node-2] 2025-09-19 
17:26:39.330006 | orchestrator | 2025-09-19 17:26:39.330062 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-09-19 17:26:39.330075 | orchestrator | Friday 19 September 2025 17:26:28 +0000 (0:00:00.311) 0:00:04.508 ****** 2025-09-19 17:26:39.330085 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:39.330095 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:26:39.330105 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:26:39.330115 | orchestrator | 2025-09-19 17:26:39.330125 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-09-19 17:26:39.330135 | orchestrator | Friday 19 September 2025 17:26:29 +0000 (0:00:00.279) 0:00:04.787 ****** 2025-09-19 17:26:39.330145 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:26:39.330154 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:26:39.330164 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:26:39.330173 | orchestrator | 2025-09-19 17:26:39.330213 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 17:26:39.330223 | orchestrator | Friday 19 September 2025 17:26:29 +0000 (0:00:00.298) 0:00:05.086 ****** 2025-09-19 17:26:39.330238 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:39.330263 | orchestrator | 2025-09-19 17:26:39.330277 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 17:26:39.330291 | orchestrator | Friday 19 September 2025 17:26:29 +0000 (0:00:00.249) 0:00:05.336 ****** 2025-09-19 17:26:39.330307 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:26:39.330322 | orchestrator | 2025-09-19 17:26:39.330337 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 17:26:39.330347 | orchestrator | Friday 19 September 2025 17:26:30 +0000 (0:00:00.430) 0:00:05.766 ****** 2025-09-19 17:26:39.330357 | 
orchestrator | skipping: [testbed-node-0]
2025-09-19 17:26:39.330366 | orchestrator |
2025-09-19 17:26:39.330382 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 17:26:39.330392 | orchestrator | Friday 19 September 2025 17:26:30 +0000 (0:00:00.616) 0:00:06.383 ******
2025-09-19 17:26:39.330401 | orchestrator |
2025-09-19 17:26:39.330410 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 17:26:39.330418 | orchestrator | Friday 19 September 2025 17:26:30 +0000 (0:00:00.067) 0:00:06.450 ******
2025-09-19 17:26:39.330427 | orchestrator |
2025-09-19 17:26:39.330435 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 17:26:39.330443 | orchestrator | Friday 19 September 2025 17:26:30 +0000 (0:00:00.065) 0:00:06.516 ******
2025-09-19 17:26:39.330452 | orchestrator |
2025-09-19 17:26:39.330460 | orchestrator | TASK [Print report file information] *******************************************
2025-09-19 17:26:39.330469 | orchestrator | Friday 19 September 2025 17:26:30 +0000 (0:00:00.069) 0:00:06.585 ******
2025-09-19 17:26:39.330477 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:26:39.330486 | orchestrator |
2025-09-19 17:26:39.330494 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-09-19 17:26:39.330503 | orchestrator | Friday 19 September 2025 17:26:31 +0000 (0:00:00.270) 0:00:06.856 ******
2025-09-19 17:26:39.330511 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:26:39.330520 | orchestrator |
2025-09-19 17:26:39.330543 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-09-19 17:26:39.330552 | orchestrator | Friday 19 September 2025 17:26:31 +0000 (0:00:00.281) 0:00:07.138 ******
2025-09-19 17:26:39.330561 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:26:39.330569 | orchestrator |
2025-09-19 17:26:39.330578 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-09-19 17:26:39.330586 | orchestrator | Friday 19 September 2025 17:26:31 +0000 (0:00:00.134) 0:00:07.272 ******
2025-09-19 17:26:39.330595 | orchestrator | changed: [testbed-node-0]
2025-09-19 17:26:39.330603 | orchestrator |
2025-09-19 17:26:39.330612 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-09-19 17:26:39.330620 | orchestrator | Friday 19 September 2025 17:26:33 +0000 (0:00:01.983) 0:00:09.256 ******
2025-09-19 17:26:39.330629 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:26:39.330637 | orchestrator |
2025-09-19 17:26:39.330646 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-09-19 17:26:39.330654 | orchestrator | Friday 19 September 2025 17:26:33 +0000 (0:00:00.247) 0:00:09.503 ******
2025-09-19 17:26:39.330663 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:26:39.330671 | orchestrator |
2025-09-19 17:26:39.330680 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-09-19 17:26:39.330688 | orchestrator | Friday 19 September 2025 17:26:34 +0000 (0:00:00.333) 0:00:09.836 ******
2025-09-19 17:26:39.330697 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:26:39.330705 | orchestrator |
2025-09-19 17:26:39.330714 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-09-19 17:26:39.330722 | orchestrator | Friday 19 September 2025 17:26:34 +0000 (0:00:00.132) 0:00:09.968 ******
2025-09-19 17:26:39.330731 | orchestrator | ok: [testbed-node-0]
2025-09-19 17:26:39.330739 | orchestrator |
2025-09-19 17:26:39.330748 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-09-19 17:26:39.330756 | orchestrator | Friday 19 September 2025 17:26:34 +0000 (0:00:00.326) 0:00:10.294 ******
2025-09-19 17:26:39.330772 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 17:26:39.330781 | orchestrator |
2025-09-19 17:26:39.330789 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-09-19 17:26:39.330798 | orchestrator | Friday 19 September 2025 17:26:34 +0000 (0:00:00.280) 0:00:10.575 ******
2025-09-19 17:26:39.330806 | orchestrator | skipping: [testbed-node-0]
2025-09-19 17:26:39.330815 | orchestrator |
2025-09-19 17:26:39.330823 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-19 17:26:39.330832 | orchestrator | Friday 19 September 2025 17:26:35 +0000 (0:00:00.300) 0:00:10.875 ******
2025-09-19 17:26:39.330840 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 17:26:39.330849 | orchestrator |
2025-09-19 17:26:39.330857 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-19 17:26:39.330866 | orchestrator | Friday 19 September 2025 17:26:36 +0000 (0:00:01.235) 0:00:12.110 ******
2025-09-19 17:26:39.330874 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 17:26:39.330883 | orchestrator |
2025-09-19 17:26:39.330891 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-09-19 17:26:39.330900 | orchestrator | Friday 19 September 2025 17:26:36 +0000 (0:00:00.243) 0:00:12.354 ******
2025-09-19 17:26:39.330908 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 17:26:39.330917 | orchestrator |
2025-09-19 17:26:39.330925 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 17:26:39.330934 | orchestrator | Friday 19 September 2025 17:26:37 +0000 (0:00:00.266) 0:00:12.620 ******
2025-09-19 17:26:39.330942 | orchestrator |
2025-09-19 17:26:39.330951 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 17:26:39.330960 | orchestrator | Friday 19 September 2025 17:26:37 +0000 (0:00:00.068) 0:00:12.689 ******
2025-09-19 17:26:39.330968 | orchestrator |
2025-09-19 17:26:39.330977 | orchestrator | TASK [Flush handlers] **********************************************************
2025-09-19 17:26:39.330985 | orchestrator | Friday 19 September 2025 17:26:37 +0000 (0:00:00.067) 0:00:12.756 ******
2025-09-19 17:26:39.330994 | orchestrator |
2025-09-19 17:26:39.331002 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-09-19 17:26:39.331011 | orchestrator | Friday 19 September 2025 17:26:37 +0000 (0:00:00.072) 0:00:12.829 ******
2025-09-19 17:26:39.331019 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-09-19 17:26:39.331028 | orchestrator |
2025-09-19 17:26:39.331036 | orchestrator | TASK [Print report file information] *******************************************
2025-09-19 17:26:39.331045 | orchestrator | Friday 19 September 2025 17:26:38 +0000 (0:00:01.517) 0:00:14.346 ******
2025-09-19 17:26:39.331053 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-09-19 17:26:39.331066 | orchestrator |  "msg": [
2025-09-19 17:26:39.331075 | orchestrator |  "Validator run completed.",
2025-09-19 17:26:39.331084 | orchestrator |  "You can find the report file here:",
2025-09-19 17:26:39.331092 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-09-19T17:26:25+00:00-report.json",
2025-09-19 17:26:39.331101 | orchestrator |  "on the following host:",
2025-09-19 17:26:39.331110 | orchestrator |  "testbed-manager"
2025-09-19 17:26:39.331118 | orchestrator |  ]
2025-09-19 17:26:39.331127 | orchestrator | }
2025-09-19 17:26:39.331136 | orchestrator |
2025-09-19 17:26:39.331144 | orchestrator | PLAY RECAP *********************************************************************
2025-09-19 17:26:39.331154 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-09-19 17:26:39.331163 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 17:26:39.331197 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-19 17:26:39.597393 | orchestrator |
2025-09-19 17:26:39.597495 | orchestrator |
2025-09-19 17:26:39.597508 | orchestrator | TASKS RECAP ********************************************************************
2025-09-19 17:26:39.597522 | orchestrator | Friday 19 September 2025 17:26:39 +0000 (0:00:00.556) 0:00:14.902 ******
2025-09-19 17:26:39.597533 | orchestrator | ===============================================================================
2025-09-19 17:26:39.597544 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.98s
2025-09-19 17:26:39.597554 | orchestrator | Write report file ------------------------------------------------------- 1.52s
2025-09-19 17:26:39.597566 | orchestrator | Aggregate test results step one ----------------------------------------- 1.24s
2025-09-19 17:26:39.597576 | orchestrator | Get container info ------------------------------------------------------ 0.97s
2025-09-19 17:26:39.597587 | orchestrator | Create report output directory ------------------------------------------ 0.83s
2025-09-19 17:26:39.597597 | orchestrator | Get timestamp for report file ------------------------------------------- 0.68s
2025-09-19 17:26:39.597608 | orchestrator | Aggregate test results step three --------------------------------------- 0.62s
2025-09-19 17:26:39.597619 | orchestrator | Print report file information ------------------------------------------- 0.56s
2025-09-19 17:26:39.597629 | orchestrator | Set test result to passed if container is existing ---------------------- 0.47s
2025-09-19 17:26:39.597640 | orchestrator | Aggregate test results step two ----------------------------------------- 0.43s
2025-09-19 17:26:39.597650 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.33s
2025-09-19 17:26:39.597661 | orchestrator | Pass test if required mgr modules are enabled --------------------------- 0.33s
2025-09-19 17:26:39.597672 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2025-09-19 17:26:39.597682 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.30s
2025-09-19 17:26:39.597693 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.30s
2025-09-19 17:26:39.597703 | orchestrator | Prepare test data for container existance test -------------------------- 0.29s
2025-09-19 17:26:39.597714 | orchestrator | Fail due to missing containers ------------------------------------------ 0.28s
2025-09-19 17:26:39.597724 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s
2025-09-19 17:26:39.597735 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.28s
2025-09-19 17:26:39.597746 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.28s
2025-09-19 17:26:39.855907 | orchestrator | + osism validate ceph-osds
2025-09-19 17:27:00.326502 | orchestrator |
2025-09-19 17:27:00.326612 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-09-19 17:27:00.326628 | orchestrator |
2025-09-19 17:27:00.326640 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-09-19 17:27:00.326652 | orchestrator | Friday 19 September 2025 17:26:56 +0000 (0:00:00.418) 0:00:00.418 ******
2025-09-19 17:27:00.326663 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 17:27:00.326674 | orchestrator |
2025-09-19 17:27:00.326685 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-19 17:27:00.326695 | orchestrator | Friday 19 September 2025 17:26:56 +0000 (0:00:00.636) 0:00:01.054 ******
2025-09-19 17:27:00.326706 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 17:27:00.326716 | orchestrator |
2025-09-19 17:27:00.326727 | orchestrator | TASK [Create report output directory] ******************************************
2025-09-19 17:27:00.326738 | orchestrator | Friday 19 September 2025 17:26:57 +0000 (0:00:00.238) 0:00:01.293 ******
2025-09-19 17:27:00.326748 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-19 17:27:00.326759 | orchestrator |
2025-09-19 17:27:00.326769 | orchestrator | TASK [Define report vars] ******************************************************
2025-09-19 17:27:00.326780 | orchestrator | Friday 19 September 2025 17:26:58 +0000 (0:00:00.968) 0:00:02.261 ******
2025-09-19 17:27:00.326816 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:27:00.326828 | orchestrator |
2025-09-19 17:27:00.326838 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-09-19 17:27:00.326849 | orchestrator | Friday 19 September 2025 17:26:58 +0000 (0:00:00.119) 0:00:02.381 ******
2025-09-19 17:27:00.326860 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:27:00.326870 | orchestrator |
2025-09-19 17:27:00.326881 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-09-19 17:27:00.326892 | orchestrator | Friday 19 September 2025 17:26:58 +0000 (0:00:00.133) 0:00:02.515 ******
2025-09-19 17:27:00.326903 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:27:00.326914 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:27:00.326925 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:27:00.326935 | orchestrator |
2025-09-19 17:27:00.326946 | orchestrator | TASK [Define OSD test variables] ***********************************************
2025-09-19 17:27:00.326956 | orchestrator | Friday 19 September 2025 17:26:58 +0000 (0:00:00.302) 0:00:02.818 ******
2025-09-19 17:27:00.326967 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:27:00.326977 | orchestrator |
2025-09-19 17:27:00.326989 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2025-09-19 17:27:00.327000 | orchestrator | Friday 19 September 2025 17:26:58 +0000 (0:00:00.166) 0:00:02.984 ******
2025-09-19 17:27:00.327010 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:27:00.327024 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:27:00.327036 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:27:00.327048 | orchestrator |
2025-09-19 17:27:00.327060 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2025-09-19 17:27:00.327072 | orchestrator | Friday 19 September 2025 17:26:59 +0000 (0:00:00.294) 0:00:03.278 ******
2025-09-19 17:27:00.327084 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:27:00.327096 | orchestrator |
2025-09-19 17:27:00.327108 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-19 17:27:00.327120 | orchestrator | Friday 19 September 2025 17:26:59 +0000 (0:00:00.529) 0:00:03.807 ******
2025-09-19 17:27:00.327132 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:27:00.327145 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:27:00.327157 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:27:00.327169 | orchestrator |
2025-09-19 17:27:00.327212 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2025-09-19 17:27:00.327231 | orchestrator | Friday 19 September 2025 17:27:00 +0000 (0:00:00.459) 0:00:04.267 ******
2025-09-19 17:27:00.327269 | orchestrator | skipping: [testbed-node-3] => (item={'id': '97366c87980577257fd5bd4ca9182f781fba86eaa6585dbbbff42573c19a6afa', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-19 17:27:00.327288 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd9a89e7e60896002c7355c1c0427ac723abe47b8d558fb9e105a85628cfc2d90', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-19 17:27:00.327304 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b5ccf0d98dcf560d3a4be36787488f2029d3f864be072a2dc21e9ea1bcf3bbac', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2025-09-19 17:27:00.327319 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a01aebe9702504e59b0a94de98ca55b45091d500cea1c42d1d35813b2962a98b', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-19 17:27:00.327332 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fe3f0fcb5b13cf076c17835ce38d1bf31c1b8bcad5d7279867ccd31a16f81e19', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-09-19 17:27:00.327380 | orchestrator | skipping: [testbed-node-3] => (item={'id': '07d56b3ead5de2b35266062871d76d639df0f27120f39fa59a629e69ce509193', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2025-09-19 17:27:00.327397 | orchestrator | skipping: [testbed-node-3] => (item={'id': '577f21c4592087e08e899cf37f0c6ebb48227902f8fcbc128d7a42a5eae80894', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})
2025-09-19 17:27:00.327408 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3ccee0a362a5a820333281e87f5b46acc294b875e8090fd52cbb7a4e87e9a3ac', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-19 17:27:00.327420 | orchestrator | skipping: [testbed-node-3] => (item={'id': '74d837910d03452d20516a207077bfa1b05cccc13a9d450b83c4c2ab854fbf21', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-09-19 17:27:00.327431 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'cfa9b1c619ffda8e600446934fa52bcecde91b2fc8fb4a445cf9fe705200766a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-09-19 17:27:00.327447 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9a8712cd3f43307c6971dd257b3f3dd0e9c4553aac0cc92b3b13a638082b4dd6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})
2025-09-19 17:27:00.327459 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c3ce72a21d76629b652356f143afe0219a566f2266f5d526e528697a51b4554e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-19 17:27:00.327470 | orchestrator | ok: [testbed-node-3] => (item={'id': '5de02ac13881f3cdedbe46f6eb948562d3947473737e2bf232dced4f13d7be25', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 17:27:00.327482 | orchestrator | ok: [testbed-node-3] => (item={'id': '4793d9f20b152cd05e76e09bcc2ff40261b928fdf0f406005681562d6ba28cd5', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 17:27:00.327493 | orchestrator | skipping: [testbed-node-3] => (item={'id': '80dbec4a8875e5a683aae37fd749874afd70dba760583a77d2be2d139e2649e6', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})
2025-09-19 17:27:00.327503 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3798fbc346aeb82797177bffdfc4f56fd145d533deb1eb840ca18f9ba38a53e0', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-19 17:27:00.327515 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4e987799d750c7db50a33862ca7ffbb894797642646d72ec709f5e96ba28200e', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-09-19 17:27:00.327525 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4dee96b5f8779272db18604a3fcdb3247abd133189f6c88c6622d71757f722e3', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-19 17:27:00.327536 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b1adaf80eccbc78a84df3168bc51dc2f048b63b1b18b5884769d6ce1efbdbfaf', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-09-19 17:27:00.327555 | orchestrator | skipping: [testbed-node-4] => (item={'id': '076fecd172526a1f96527dc79c287dc6a9ee1c984c0a531c2e82654098942f63', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2025-09-19 17:27:00.327566 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c359b6d20e49abcdbb73621506b9210e40c1683b22af07ba538ad0d55878379e', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 17:27:00.327585 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8b6ebc64a7ba33338b17ce1832853bfcaf8e20ff8da5ed8739b138bec7a71ead', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-19 17:27:00.488057 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0660f1509b633d97fea3378391e6db23bb835378ca7db3b3152c12a3abdb10e9', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 17:27:00.488155 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4198ec45b19ace708137ce1c30a9375de08216bf0e38d9892a9ac30d5ef54284', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-09-19 17:27:00.488174 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b455036463c8491914124e9d4f08b3a7fa184466161d656f44d0a5db300b6e9b', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})
2025-09-19 17:27:00.488267 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ef65ae49af522721c366deaa8aa5ff2491a8021713f4719a9a31a03426ab4104', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2025-09-19 17:27:00.488309 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a1d290685994360bd066609c505d8b8a739d9476c427ee7b6534267535c48a4c', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})
2025-09-19 17:27:00.488322 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'b6013721a087f4b399ed94c18ba4876d585cb665533e892d69e16285eeef5fd9', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-19 17:27:00.488334 | orchestrator | skipping: [testbed-node-4] => (item={'id': '994f41bfd58813f1bc4764414f6f96841c12ce6d3d3a7c4635cf4558c95ea7c1', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-09-19 17:27:00.488345 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7025af283e6cc4a2c51dd7ba6651416d61a8f17c2b7ab70445e77b776ae0a61a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-09-19 17:27:00.488357 | orchestrator | skipping: [testbed-node-4] => (item={'id': '185f7f9d3407fe383b79749e9a0776b0cecc9ca9ecfa6db1d175677aa96a1fc1', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})
2025-09-19 17:27:00.488368 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4994ddc23fb4210da2e53281fbd9a6a99e4788f598e5e529caed542c88a9d9d7', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-19 17:27:00.488379 | orchestrator | ok: [testbed-node-4] => (item={'id': '40ff486803346389f393d81678d3ca04445e101d380a1f8fdf418d705a639a7a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 17:27:00.488410 | orchestrator | ok: [testbed-node-4] => (item={'id': '80551b0e07d8fb3f56fd7ac8b5c71213761f766cd68ea1b4f017697330424e3b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 17:27:00.488422 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0c932e28f75ee524bc1a37ca868b695875090f2d49e91cb763140825f5ee62fb', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})
2025-09-19 17:27:00.488434 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd743249f996ba410ea162f716d0ce634a404f822ef235193ec0b6ffcb8ce45b3', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-09-19 17:27:00.488445 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e7ee523d5766bfd1bb9dc1b1bb041cffb2bef602b10d0d3f71175853f4dfd273', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-09-19 17:27:00.488476 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e3300dbdf027c4d9ceb10639e27a51aac208f6a73328ba9cf67ece94bcda6a55', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 17:27:00.488488 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bcb641f7aff1d1bf0117e8ff0d4b409957fb7392fcdae4be79f8616695d0d7aa', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 17:27:00.488499 | orchestrator | skipping: [testbed-node-4] => (item={'id': '217f2e6f629278d0ec40ab36024d717b348c86e96308bd6f81dfa0b6d6563c28', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})
2025-09-19 17:27:00.488510 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cbd5ec7a2343b07af4c6f0c6fd7ceebf57236bf052b64611838ef561b13a041e', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-19 17:27:00.488521 | orchestrator | skipping: [testbed-node-5] => (item={'id': '906d467a483b046b161e9686304953d8f29e7e3cd0429e5db9febac0f603086f', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})
2025-09-19 17:27:00.488532 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8bcaf55bf054d054c7a2a019a084d26283889416100a7434c4c78e6e8688184f', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})
2025-09-19 17:27:00.488543 | orchestrator | skipping: [testbed-node-5] => (item={'id': '016685c1cc092bdae17291012699c9b20d8b66752591f1f067f0e21e806c6176', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})
2025-09-19 17:27:00.488554 | orchestrator | skipping: [testbed-node-5] => (item={'id': '63c46b1441ed35179ef8e9d898126835e906454259eb689b7ba6d10b4078b972', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 11 minutes (healthy)'})
2025-09-19 17:27:00.488566 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ff04afe29669a9c530c33f92c0b86384eb1a4b4bd72d0db6bc479386ece5a851', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})
2025-09-19 17:27:00.488584 | orchestrator | skipping: [testbed-node-5] => (item={'id': '86ea9163557d2a7b5bc4487dba0e51f6b4f6bc8d5689783ec67fbb9f1ae3fc0d', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})
2025-09-19 17:27:00.488603 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fac7218a3bcc50f881cb1bb11443b8af9327d566e6065dfa1a8af97468f6b4be', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})
2025-09-19 17:27:00.488618 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4879f33eedf2d1f6363c900d8ccae3bef4f801f835ac6f90b78a6696a032299a', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-09-19 17:27:00.488630 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b992b1f0c7913b9b0e0dbea1253ee3d4df6d454ab025c75bce3f2be1df10b848', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-09-19 17:27:00.488643 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fcc9259f1419e71a24ef111d0589ee3c629aa5179ced2a4d3300b0ad4a71da20', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})
2025-09-19 17:27:00.488656 | orchestrator | skipping: [testbed-node-5] => (item={'id': '012efe0b0c395f9e1598230625ad54b062366d5ecf46090c5cbbef2719dea979', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})
2025-09-19 17:27:00.488675 | orchestrator | ok: [testbed-node-5] => (item={'id': 'cac8cb9bf851fcbb31b424bd63a0417ff331afb41118b86b78033ca5e5dafa28', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 17:27:08.513908 | orchestrator | ok: [testbed-node-5] => (item={'id': 'aa47c31314f82cc2e098b11343bec71befea9d0f26d5f7863bd2b02164d95784', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'})
2025-09-19 17:27:08.514011 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6e501e3b8add484828fef4f199e4eb3655753217ad68f61ed325480ae77a4ad3', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})
2025-09-19 17:27:08.514110 | orchestrator | skipping: [testbed-node-5] => (item={'id': '25a3abe138f9bd244126f74e73c9259b8b4b0b63bf2bc963382d647c0182302a', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-09-19 17:27:08.514133 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a059e17db142a48e71e9aa712fd4852ceb90accfa9b2224f9df4edba467a804a', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-09-19 17:27:08.514171 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9f7323bc4e0be02dbd5a5f0b1bef988bf5d3f9c1466a7d24739660aef1c49a4e', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 17:27:08.514247 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3de9d911f4961f581570f770c48dd247ae9e564f59d1c235575279c60cf0d602', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-09-19 17:27:08.514266 | orchestrator | skipping: [testbed-node-5] => (item={'id': '098ea824de19dbe901d305396387307683d31e5fb033fc3bf144ed855eec3926', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})
2025-09-19 17:27:08.514285 | orchestrator |
2025-09-19 17:27:08.514305 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-09-19 17:27:08.514325 | orchestrator | Friday 19 September 2025 17:27:00 +0000 (0:00:00.460) 0:00:04.727 ******
2025-09-19 17:27:08.514345 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:27:08.514391 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:27:08.514403 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:27:08.514415 | orchestrator |
2025-09-19 17:27:08.514428 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-09-19 17:27:08.514441 | orchestrator | Friday 19 September 2025 17:27:00 +0000 (0:00:00.293) 0:00:05.021 ******
2025-09-19 17:27:08.514453 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:27:08.514467 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:27:08.514479 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:27:08.514491 | orchestrator |
2025-09-19 17:27:08.514504 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-09-19 17:27:08.514516 | orchestrator | Friday 19 September 2025 17:27:01 +0000 (0:00:00.284) 0:00:05.306 ******
2025-09-19 17:27:08.514528 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:27:08.514541 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:27:08.514553 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:27:08.514565 | orchestrator |
2025-09-19 17:27:08.514578 | orchestrator | TASK [Prepare test data] *******************************************************
2025-09-19 17:27:08.514591 | orchestrator | Friday 19 September 2025 17:27:01 +0000 (0:00:00.468) 0:00:05.775 ******
2025-09-19 17:27:08.514604 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:27:08.514616 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:27:08.514628 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:27:08.514641 | orchestrator |
2025-09-19 17:27:08.514653 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-09-19 17:27:08.514665 | orchestrator | Friday 19 September 2025 17:27:01 +0000 (0:00:00.296) 0:00:06.071 ******
2025-09-19 17:27:08.514678 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-09-19 17:27:08.514692 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-09-19 17:27:08.514703 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:27:08.514716 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-09-19 17:27:08.514729 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-09-19 17:27:08.514742 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:27:08.514754 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-09-19 17:27:08.514767 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-09-19 17:27:08.514779 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:27:08.514789 | orchestrator |
2025-09-19 17:27:08.514800 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-09-19 17:27:08.514811 | orchestrator | Friday 19 September 2025 17:27:02 +0000 (0:00:00.329) 0:00:06.401 ******
2025-09-19 17:27:08.514822 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:27:08.514832 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:27:08.514843 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:27:08.514854 | orchestrator |
2025-09-19 17:27:08.514884 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-09-19 17:27:08.514895 | orchestrator | Friday 19 September 2025 17:27:02 +0000 (0:00:00.318) 0:00:06.719 ******
2025-09-19 17:27:08.514906 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:27:08.514917 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:27:08.514928 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:27:08.514938 | orchestrator |
2025-09-19 17:27:08.514949 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-09-19 17:27:08.514960 | orchestrator | Friday 19 September 2025 17:27:03 +0000 (0:00:00.458) 0:00:07.178 ******
2025-09-19 17:27:08.514970 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:27:08.514981 | orchestrator | skipping: [testbed-node-4]
2025-09-19 17:27:08.514991 | orchestrator | skipping: [testbed-node-5]
2025-09-19 17:27:08.515002 | orchestrator |
2025-09-19 17:27:08.515013 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-09-19 17:27:08.515031 | orchestrator | Friday 19 September 2025 17:27:03 +0000 (0:00:00.298) 0:00:07.477 ******
2025-09-19 17:27:08.515042 | orchestrator | ok: [testbed-node-3]
2025-09-19 17:27:08.515053 | orchestrator | ok: [testbed-node-4]
2025-09-19 17:27:08.515063 | orchestrator | ok: [testbed-node-5]
2025-09-19 17:27:08.515074 | orchestrator |
2025-09-19 17:27:08.515085 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-09-19 17:27:08.515096 | orchestrator | Friday 19 September 2025 17:27:03 +0000 (0:00:00.300) 0:00:07.777 ******
2025-09-19 17:27:08.515106 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:27:08.515117 | orchestrator |
2025-09-19 17:27:08.515127 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-09-19 17:27:08.515138 | orchestrator | Friday 19 September 2025 17:27:03 +0000 (0:00:00.240) 0:00:08.018 ******
2025-09-19 17:27:08.515148 | orchestrator | skipping: [testbed-node-3]
2025-09-19 17:27:08.515159 | orchestrator |
2025-09-19 17:27:08.515195 | orchestrator | TASK
[Aggregate test results step three] *************************************** 2025-09-19 17:27:08.515207 | orchestrator | Friday 19 September 2025 17:27:04 +0000 (0:00:00.229) 0:00:08.247 ****** 2025-09-19 17:27:08.515218 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:27:08.515229 | orchestrator | 2025-09-19 17:27:08.515239 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 17:27:08.515250 | orchestrator | Friday 19 September 2025 17:27:04 +0000 (0:00:00.238) 0:00:08.485 ****** 2025-09-19 17:27:08.515261 | orchestrator | 2025-09-19 17:27:08.515271 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 17:27:08.515282 | orchestrator | Friday 19 September 2025 17:27:04 +0000 (0:00:00.064) 0:00:08.550 ****** 2025-09-19 17:27:08.515293 | orchestrator | 2025-09-19 17:27:08.515303 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 17:27:08.515314 | orchestrator | Friday 19 September 2025 17:27:04 +0000 (0:00:00.064) 0:00:08.614 ****** 2025-09-19 17:27:08.515324 | orchestrator | 2025-09-19 17:27:08.515335 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 17:27:08.515346 | orchestrator | Friday 19 September 2025 17:27:04 +0000 (0:00:00.236) 0:00:08.851 ****** 2025-09-19 17:27:08.515356 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:27:08.515367 | orchestrator | 2025-09-19 17:27:08.515378 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-09-19 17:27:08.515388 | orchestrator | Friday 19 September 2025 17:27:04 +0000 (0:00:00.257) 0:00:09.109 ****** 2025-09-19 17:27:08.515399 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:27:08.515410 | orchestrator | 2025-09-19 17:27:08.515420 | orchestrator | TASK [Prepare test data] 
******************************************************* 2025-09-19 17:27:08.515431 | orchestrator | Friday 19 September 2025 17:27:05 +0000 (0:00:00.255) 0:00:09.364 ****** 2025-09-19 17:27:08.515442 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:27:08.515453 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:27:08.515464 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:27:08.515474 | orchestrator | 2025-09-19 17:27:08.515485 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-09-19 17:27:08.515496 | orchestrator | Friday 19 September 2025 17:27:05 +0000 (0:00:00.285) 0:00:09.650 ****** 2025-09-19 17:27:08.515507 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:27:08.515517 | orchestrator | 2025-09-19 17:27:08.515528 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-09-19 17:27:08.515539 | orchestrator | Friday 19 September 2025 17:27:05 +0000 (0:00:00.224) 0:00:09.874 ****** 2025-09-19 17:27:08.515549 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-19 17:27:08.515560 | orchestrator | 2025-09-19 17:27:08.515571 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-09-19 17:27:08.515581 | orchestrator | Friday 19 September 2025 17:27:07 +0000 (0:00:01.645) 0:00:11.520 ****** 2025-09-19 17:27:08.515592 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:27:08.515610 | orchestrator | 2025-09-19 17:27:08.515623 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-09-19 17:27:08.515641 | orchestrator | Friday 19 September 2025 17:27:07 +0000 (0:00:00.124) 0:00:11.645 ****** 2025-09-19 17:27:08.515661 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:27:08.515679 | orchestrator | 2025-09-19 17:27:08.515697 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-09-19 
17:27:08.515715 | orchestrator | Friday 19 September 2025 17:27:07 +0000 (0:00:00.296) 0:00:11.942 ****** 2025-09-19 17:27:08.515732 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:27:08.515751 | orchestrator | 2025-09-19 17:27:08.515770 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-09-19 17:27:08.515790 | orchestrator | Friday 19 September 2025 17:27:07 +0000 (0:00:00.110) 0:00:12.052 ****** 2025-09-19 17:27:08.515809 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:27:08.515822 | orchestrator | 2025-09-19 17:27:08.515833 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 17:27:08.515843 | orchestrator | Friday 19 September 2025 17:27:08 +0000 (0:00:00.122) 0:00:12.174 ****** 2025-09-19 17:27:08.515854 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:27:08.515865 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:27:08.515875 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:27:08.515886 | orchestrator | 2025-09-19 17:27:08.515897 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-09-19 17:27:08.515916 | orchestrator | Friday 19 September 2025 17:27:08 +0000 (0:00:00.492) 0:00:12.666 ****** 2025-09-19 17:27:21.002376 | orchestrator | changed: [testbed-node-3] 2025-09-19 17:27:21.002502 | orchestrator | changed: [testbed-node-5] 2025-09-19 17:27:21.002517 | orchestrator | changed: [testbed-node-4] 2025-09-19 17:27:21.002529 | orchestrator | 2025-09-19 17:27:21.002542 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-09-19 17:27:21.002554 | orchestrator | Friday 19 September 2025 17:27:10 +0000 (0:00:02.421) 0:00:15.087 ****** 2025-09-19 17:27:21.002565 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:27:21.002576 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:27:21.002587 | orchestrator | ok: [testbed-node-5] 2025-09-19 
17:27:21.002598 | orchestrator | 2025-09-19 17:27:21.002609 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-09-19 17:27:21.002620 | orchestrator | Friday 19 September 2025 17:27:11 +0000 (0:00:00.283) 0:00:15.371 ****** 2025-09-19 17:27:21.002631 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:27:21.002641 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:27:21.002652 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:27:21.002663 | orchestrator | 2025-09-19 17:27:21.002674 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-09-19 17:27:21.002684 | orchestrator | Friday 19 September 2025 17:27:11 +0000 (0:00:00.472) 0:00:15.844 ****** 2025-09-19 17:27:21.002695 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:27:21.002706 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:27:21.002716 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:27:21.002727 | orchestrator | 2025-09-19 17:27:21.002737 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-09-19 17:27:21.002748 | orchestrator | Friday 19 September 2025 17:27:12 +0000 (0:00:00.488) 0:00:16.333 ****** 2025-09-19 17:27:21.002758 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:27:21.002769 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:27:21.002779 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:27:21.002790 | orchestrator | 2025-09-19 17:27:21.002801 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-09-19 17:27:21.002812 | orchestrator | Friday 19 September 2025 17:27:12 +0000 (0:00:00.335) 0:00:16.668 ****** 2025-09-19 17:27:21.002822 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:27:21.002832 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:27:21.002843 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:27:21.002853 | orchestrator | 
2025-09-19 17:27:21.002864 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-09-19 17:27:21.002899 | orchestrator | Friday 19 September 2025 17:27:12 +0000 (0:00:00.273) 0:00:16.942 ****** 2025-09-19 17:27:21.002912 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:27:21.002924 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:27:21.002936 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:27:21.002948 | orchestrator | 2025-09-19 17:27:21.002961 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-09-19 17:27:21.002973 | orchestrator | Friday 19 September 2025 17:27:13 +0000 (0:00:00.303) 0:00:17.245 ****** 2025-09-19 17:27:21.002986 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:27:21.002998 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:27:21.003011 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:27:21.003023 | orchestrator | 2025-09-19 17:27:21.003035 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-09-19 17:27:21.003048 | orchestrator | Friday 19 September 2025 17:27:13 +0000 (0:00:00.716) 0:00:17.962 ****** 2025-09-19 17:27:21.003060 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:27:21.003073 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:27:21.003085 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:27:21.003097 | orchestrator | 2025-09-19 17:27:21.003109 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-09-19 17:27:21.003121 | orchestrator | Friday 19 September 2025 17:27:14 +0000 (0:00:00.479) 0:00:18.441 ****** 2025-09-19 17:27:21.003134 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:27:21.003146 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:27:21.003159 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:27:21.003172 | orchestrator | 2025-09-19 17:27:21.003210 | orchestrator | TASK [Fail 
test if any sub test failed] **************************************** 2025-09-19 17:27:21.003223 | orchestrator | Friday 19 September 2025 17:27:14 +0000 (0:00:00.303) 0:00:18.744 ****** 2025-09-19 17:27:21.003235 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:27:21.003248 | orchestrator | skipping: [testbed-node-4] 2025-09-19 17:27:21.003260 | orchestrator | skipping: [testbed-node-5] 2025-09-19 17:27:21.003273 | orchestrator | 2025-09-19 17:27:21.003285 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-09-19 17:27:21.003296 | orchestrator | Friday 19 September 2025 17:27:14 +0000 (0:00:00.293) 0:00:19.038 ****** 2025-09-19 17:27:21.003306 | orchestrator | ok: [testbed-node-3] 2025-09-19 17:27:21.003317 | orchestrator | ok: [testbed-node-4] 2025-09-19 17:27:21.003327 | orchestrator | ok: [testbed-node-5] 2025-09-19 17:27:21.003338 | orchestrator | 2025-09-19 17:27:21.003348 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-09-19 17:27:21.003359 | orchestrator | Friday 19 September 2025 17:27:15 +0000 (0:00:00.538) 0:00:19.576 ****** 2025-09-19 17:27:21.003370 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 17:27:21.003380 | orchestrator | 2025-09-19 17:27:21.003391 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-09-19 17:27:21.003401 | orchestrator | Friday 19 September 2025 17:27:15 +0000 (0:00:00.252) 0:00:19.829 ****** 2025-09-19 17:27:21.003412 | orchestrator | skipping: [testbed-node-3] 2025-09-19 17:27:21.003422 | orchestrator | 2025-09-19 17:27:21.003433 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-09-19 17:27:21.003443 | orchestrator | Friday 19 September 2025 17:27:15 +0000 (0:00:00.252) 0:00:20.082 ****** 2025-09-19 17:27:21.003454 | orchestrator | ok: [testbed-node-3 -> 
testbed-manager(192.168.16.5)] 2025-09-19 17:27:21.003465 | orchestrator | 2025-09-19 17:27:21.003476 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-09-19 17:27:21.003533 | orchestrator | Friday 19 September 2025 17:27:17 +0000 (0:00:01.660) 0:00:21.742 ****** 2025-09-19 17:27:21.003545 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 17:27:21.003556 | orchestrator | 2025-09-19 17:27:21.003567 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-09-19 17:27:21.003577 | orchestrator | Friday 19 September 2025 17:27:17 +0000 (0:00:00.253) 0:00:21.996 ****** 2025-09-19 17:27:21.003614 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 17:27:21.003625 | orchestrator | 2025-09-19 17:27:21.003636 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 17:27:21.003647 | orchestrator | Friday 19 September 2025 17:27:18 +0000 (0:00:00.236) 0:00:22.232 ****** 2025-09-19 17:27:21.003657 | orchestrator | 2025-09-19 17:27:21.003668 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 17:27:21.003678 | orchestrator | Friday 19 September 2025 17:27:18 +0000 (0:00:00.064) 0:00:22.297 ****** 2025-09-19 17:27:21.003688 | orchestrator | 2025-09-19 17:27:21.003699 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-09-19 17:27:21.003710 | orchestrator | Friday 19 September 2025 17:27:18 +0000 (0:00:00.066) 0:00:22.363 ****** 2025-09-19 17:27:21.003720 | orchestrator | 2025-09-19 17:27:21.003730 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-09-19 17:27:21.003741 | orchestrator | Friday 19 September 2025 17:27:18 +0000 (0:00:00.067) 0:00:22.431 ****** 2025-09-19 17:27:21.003752 | orchestrator | 
changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-19 17:27:21.003762 | orchestrator | 2025-09-19 17:27:21.003773 | orchestrator | TASK [Print report file information] ******************************************* 2025-09-19 17:27:21.003783 | orchestrator | Friday 19 September 2025 17:27:19 +0000 (0:00:01.569) 0:00:24.000 ****** 2025-09-19 17:27:21.003794 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-09-19 17:27:21.003804 | orchestrator |  "msg": [ 2025-09-19 17:27:21.003815 | orchestrator |  "Validator run completed.", 2025-09-19 17:27:21.003826 | orchestrator |  "You can find the report file here:", 2025-09-19 17:27:21.003842 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-09-19T17:26:56+00:00-report.json", 2025-09-19 17:27:21.003854 | orchestrator |  "on the following host:", 2025-09-19 17:27:21.003864 | orchestrator |  "testbed-manager" 2025-09-19 17:27:21.003875 | orchestrator |  ] 2025-09-19 17:27:21.003886 | orchestrator | } 2025-09-19 17:27:21.003897 | orchestrator | 2025-09-19 17:27:21.003907 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:27:21.003919 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-09-19 17:27:21.003932 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 17:27:21.003942 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-19 17:27:21.003953 | orchestrator | 2025-09-19 17:27:21.003964 | orchestrator | 2025-09-19 17:27:21.003974 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:27:21.003985 | orchestrator | Friday 19 September 2025 17:27:20 +0000 (0:00:00.835) 0:00:24.836 ****** 2025-09-19 17:27:21.003995 | orchestrator | 
=============================================================================== 2025-09-19 17:27:21.004006 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.42s 2025-09-19 17:27:21.004016 | orchestrator | Aggregate test results step one ----------------------------------------- 1.66s 2025-09-19 17:27:21.004026 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.65s 2025-09-19 17:27:21.004037 | orchestrator | Write report file ------------------------------------------------------- 1.57s 2025-09-19 17:27:21.004047 | orchestrator | Create report output directory ------------------------------------------ 0.97s 2025-09-19 17:27:21.004058 | orchestrator | Print report file information ------------------------------------------- 0.84s 2025-09-19 17:27:21.004068 | orchestrator | Prepare test data ------------------------------------------------------- 0.72s 2025-09-19 17:27:21.004079 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s 2025-09-19 17:27:21.004096 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.54s 2025-09-19 17:27:21.004107 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.53s 2025-09-19 17:27:21.004117 | orchestrator | Prepare test data ------------------------------------------------------- 0.49s 2025-09-19 17:27:21.004128 | orchestrator | Fail if count of encrypted OSDs does not match -------------------------- 0.49s 2025-09-19 17:27:21.004138 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.48s 2025-09-19 17:27:21.004148 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.47s 2025-09-19 17:27:21.004159 | orchestrator | Set test result to passed if count matches ------------------------------ 0.47s 2025-09-19 17:27:21.004169 | orchestrator | Get list of 
ceph-osd containers on host --------------------------------- 0.46s 2025-09-19 17:27:21.004198 | orchestrator | Prepare test data ------------------------------------------------------- 0.46s 2025-09-19 17:27:21.004209 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.46s 2025-09-19 17:27:21.004220 | orchestrator | Flush handlers ---------------------------------------------------------- 0.37s 2025-09-19 17:27:21.004231 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.34s 2025-09-19 17:27:21.288051 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-09-19 17:27:21.297287 | orchestrator | + set -e 2025-09-19 17:27:21.297354 | orchestrator | + source /opt/manager-vars.sh 2025-09-19 17:27:21.297378 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-19 17:27:21.297389 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-19 17:27:21.297400 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-19 17:27:21.297411 | orchestrator | ++ CEPH_VERSION=reef 2025-09-19 17:27:21.297422 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-19 17:27:21.297433 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-19 17:27:21.297444 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-19 17:27:21.297455 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-19 17:27:21.297465 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-09-19 17:27:21.297476 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-09-19 17:27:21.297487 | orchestrator | ++ export ARA=false 2025-09-19 17:27:21.297854 | orchestrator | ++ ARA=false 2025-09-19 17:27:21.297872 | orchestrator | ++ export DEPLOY_MODE=manager 2025-09-19 17:27:21.297883 | orchestrator | ++ DEPLOY_MODE=manager 2025-09-19 17:27:21.297893 | orchestrator | ++ export TEMPEST=false 2025-09-19 17:27:21.297904 | orchestrator | ++ TEMPEST=false 2025-09-19 17:27:21.297914 | orchestrator | ++ export IS_ZUUL=true 2025-09-19 
17:27:21.297925 | orchestrator | ++ IS_ZUUL=true 2025-09-19 17:27:21.297936 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.107 2025-09-19 17:27:21.297947 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.107 2025-09-19 17:27:21.297957 | orchestrator | ++ export EXTERNAL_API=false 2025-09-19 17:27:21.297968 | orchestrator | ++ EXTERNAL_API=false 2025-09-19 17:27:21.297979 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-09-19 17:27:21.297989 | orchestrator | ++ IMAGE_USER=ubuntu 2025-09-19 17:27:21.298000 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-09-19 17:27:21.298010 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-09-19 17:27:21.298103 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-09-19 17:27:21.298115 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-09-19 17:27:21.298136 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-09-19 17:27:21.298147 | orchestrator | + source /etc/os-release 2025-09-19 17:27:21.298158 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.3 LTS' 2025-09-19 17:27:21.298169 | orchestrator | ++ NAME=Ubuntu 2025-09-19 17:27:21.298201 | orchestrator | ++ VERSION_ID=24.04 2025-09-19 17:27:21.298213 | orchestrator | ++ VERSION='24.04.3 LTS (Noble Numbat)' 2025-09-19 17:27:21.298223 | orchestrator | ++ VERSION_CODENAME=noble 2025-09-19 17:27:21.298235 | orchestrator | ++ ID=ubuntu 2025-09-19 17:27:21.298245 | orchestrator | ++ ID_LIKE=debian 2025-09-19 17:27:21.298256 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-09-19 17:27:21.298266 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-09-19 17:27:21.298277 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-09-19 17:27:21.298288 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-09-19 17:27:21.298300 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-09-19 17:27:21.298310 | orchestrator | ++ LOGO=ubuntu-logo 2025-09-19 17:27:21.298346 | orchestrator | + [[ 
ubuntu == \u\b\u\n\t\u ]] 2025-09-19 17:27:21.298358 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-09-19 17:27:21.298384 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-19 17:27:21.331252 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-09-19 17:27:43.849414 | orchestrator | 2025-09-19 17:27:43.849548 | orchestrator | # Status of Elasticsearch 2025-09-19 17:27:43.849566 | orchestrator | 2025-09-19 17:27:43.849579 | orchestrator | + pushd /opt/configuration/contrib 2025-09-19 17:27:43.849591 | orchestrator | + echo 2025-09-19 17:27:43.849603 | orchestrator | + echo '# Status of Elasticsearch' 2025-09-19 17:27:43.849614 | orchestrator | + echo 2025-09-19 17:27:43.849625 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-09-19 17:27:44.061216 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-09-19 17:27:44.061613 | orchestrator | 2025-09-19 17:27:44.061643 | orchestrator | + echo 2025-09-19 17:27:44.061656 | orchestrator | + echo '# Status of MariaDB' 2025-09-19 17:27:44.061669 | orchestrator | # Status of MariaDB 2025-09-19 17:27:44.061936 | orchestrator | 2025-09-19 17:27:44.061958 | orchestrator | + echo 2025-09-19 17:27:44.061969 | orchestrator | + MARIADB_USER=root_shard_0 2025-09-19 17:27:44.061981 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-09-19 17:27:44.125720 | orchestrator | Reading package lists... 2025-09-19 17:27:44.453828 | orchestrator | Building dependency tree... 2025-09-19 17:27:44.454471 | orchestrator | Reading state information... 2025-09-19 17:27:44.818655 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-09-19 17:27:44.818757 | orchestrator | bc set to manually installed. 2025-09-19 17:27:44.818772 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
2025-09-19 17:27:45.435321 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-09-19 17:27:45.436039 | orchestrator | 2025-09-19 17:27:45.436088 | orchestrator | # Status of Prometheus 2025-09-19 17:27:45.436103 | orchestrator | 2025-09-19 17:27:45.436117 | orchestrator | + echo 2025-09-19 17:27:45.436129 | orchestrator | + echo '# Status of Prometheus' 2025-09-19 17:27:45.436143 | orchestrator | + echo 2025-09-19 17:27:45.436152 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-09-19 17:27:45.494276 | orchestrator | Unauthorized 2025-09-19 17:27:45.497936 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-09-19 17:27:45.556791 | orchestrator | Unauthorized 2025-09-19 17:27:45.559938 | orchestrator | 2025-09-19 17:27:45.559989 | orchestrator | # Status of RabbitMQ 2025-09-19 17:27:45.560002 | orchestrator | 2025-09-19 17:27:45.560013 | orchestrator | + echo 2025-09-19 17:27:45.560024 | orchestrator | + echo '# Status of RabbitMQ' 2025-09-19 17:27:45.560035 | orchestrator | + echo 2025-09-19 17:27:45.560047 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-09-19 17:27:45.996079 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-09-19 17:27:46.004645 | orchestrator | 2025-09-19 17:27:46.004706 | orchestrator | # Status of Redis 2025-09-19 17:27:46.004719 | orchestrator | 2025-09-19 17:27:46.004733 | orchestrator | + echo 2025-09-19 17:27:46.004751 | orchestrator | + echo '# Status of Redis' 2025-09-19 17:27:46.004771 | orchestrator | + echo 2025-09-19 17:27:46.004784 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-09-19 17:27:46.008959 | orchestrator | 
TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001631s;;;0.000000;10.000000 2025-09-19 17:27:46.009475 | orchestrator | 2025-09-19 17:27:46.009499 | orchestrator | + popd 2025-09-19 17:27:46.009511 | orchestrator | + echo 2025-09-19 17:27:46.009522 | orchestrator | + echo '# Create backup of MariaDB database' 2025-09-19 17:27:46.009534 | orchestrator | # Create backup of MariaDB database 2025-09-19 17:27:46.009545 | orchestrator | 2025-09-19 17:27:46.009556 | orchestrator | + echo 2025-09-19 17:27:46.009593 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-09-19 17:27:47.975130 | orchestrator | 2025-09-19 17:27:47 | INFO  | Task 89ff17a6-2997-4240-94d9-b26ea48052a2 (mariadb_backup) was prepared for execution. 2025-09-19 17:27:47.975251 | orchestrator | 2025-09-19 17:27:47 | INFO  | It takes a moment until task 89ff17a6-2997-4240-94d9-b26ea48052a2 (mariadb_backup) has been started and output is visible here. 2025-09-19 17:30:28.371503 | orchestrator | 2025-09-19 17:30:28.371593 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-19 17:30:28.371602 | orchestrator | 2025-09-19 17:30:28.371609 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-19 17:30:28.371616 | orchestrator | Friday 19 September 2025 17:27:51 +0000 (0:00:00.179) 0:00:00.179 ****** 2025-09-19 17:30:28.371623 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:30:28.371630 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:30:28.371637 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:30:28.371643 | orchestrator | 2025-09-19 17:30:28.371649 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-19 17:30:28.371656 | orchestrator | Friday 19 September 2025 17:27:52 +0000 (0:00:00.317) 0:00:00.496 ****** 2025-09-19 17:30:28.371662 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-09-19 17:30:28.371669 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-19 17:30:28.371675 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-19 17:30:28.371681 | orchestrator | 2025-09-19 17:30:28.371687 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-19 17:30:28.371693 | orchestrator | 2025-09-19 17:30:28.371699 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-19 17:30:28.371705 | orchestrator | Friday 19 September 2025 17:27:52 +0000 (0:00:00.595) 0:00:01.091 ****** 2025-09-19 17:30:28.371711 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-19 17:30:28.371718 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-19 17:30:28.371724 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-19 17:30:28.371730 | orchestrator | 2025-09-19 17:30:28.371736 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-19 17:30:28.371742 | orchestrator | Friday 19 September 2025 17:27:53 +0000 (0:00:00.409) 0:00:01.500 ****** 2025-09-19 17:30:28.371749 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-19 17:30:28.371755 | orchestrator | 2025-09-19 17:30:28.371761 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-09-19 17:30:28.371780 | orchestrator | Friday 19 September 2025 17:27:53 +0000 (0:00:00.511) 0:00:02.011 ****** 2025-09-19 17:30:28.371786 | orchestrator | ok: [testbed-node-1] 2025-09-19 17:30:28.371792 | orchestrator | ok: [testbed-node-0] 2025-09-19 17:30:28.371798 | orchestrator | ok: [testbed-node-2] 2025-09-19 17:30:28.371804 | orchestrator | 2025-09-19 17:30:28.371810 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-09-19 17:30:28.371816 | orchestrator | Friday 19 September 2025 17:27:56 +0000 (0:00:03.010) 0:00:05.022 ****** 2025-09-19 17:30:28.371822 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:30:28.371829 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:30:28.371835 | orchestrator | 2025-09-19 17:30:28.371841 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2025-09-19 17:30:28.371847 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-19 17:30:28.371853 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-09-19 17:30:28.371859 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-09-19 17:30:28.371865 | orchestrator | mariadb_bootstrap_restart 2025-09-19 17:30:28.371871 | orchestrator | changed: [testbed-node-0] 2025-09-19 17:30:28.371878 | orchestrator | 2025-09-19 17:30:28.371884 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-19 17:30:28.371905 | orchestrator | skipping: no hosts matched 2025-09-19 17:30:28.371912 | orchestrator | 2025-09-19 17:30:28.371918 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-19 17:30:28.371924 | orchestrator | skipping: no hosts matched 2025-09-19 17:30:28.371930 | orchestrator | 2025-09-19 17:30:28.371936 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-19 17:30:28.371942 | orchestrator | skipping: no hosts matched 2025-09-19 17:30:28.371948 | orchestrator | 2025-09-19 17:30:28.371954 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-09-19 17:30:28.371960 | orchestrator | 2025-09-19 17:30:28.371966 | orchestrator | TASK [Include mariadb post-deploy.yml] 
***************************************** 2025-09-19 17:30:28.371972 | orchestrator | Friday 19 September 2025 17:30:27 +0000 (0:02:30.751) 0:02:35.774 ****** 2025-09-19 17:30:28.371978 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:30:28.371984 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:30:28.371990 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:30:28.371996 | orchestrator | 2025-09-19 17:30:28.372002 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-19 17:30:28.372007 | orchestrator | Friday 19 September 2025 17:30:27 +0000 (0:00:00.311) 0:02:36.086 ****** 2025-09-19 17:30:28.372013 | orchestrator | skipping: [testbed-node-0] 2025-09-19 17:30:28.372019 | orchestrator | skipping: [testbed-node-1] 2025-09-19 17:30:28.372025 | orchestrator | skipping: [testbed-node-2] 2025-09-19 17:30:28.372031 | orchestrator | 2025-09-19 17:30:28.372037 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-19 17:30:28.372044 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-19 17:30:28.372051 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 17:30:28.372058 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-19 17:30:28.372064 | orchestrator | 2025-09-19 17:30:28.372071 | orchestrator | 2025-09-19 17:30:28.372078 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:30:28.372085 | orchestrator | Friday 19 September 2025 17:30:27 +0000 (0:00:00.217) 0:02:36.303 ****** 2025-09-19 17:30:28.372092 | orchestrator | =============================================================================== 2025-09-19 17:30:28.372110 | orchestrator | mariadb : Taking full database backup via Mariabackup 
----------------- 150.75s 2025-09-19 17:30:28.372117 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.01s 2025-09-19 17:30:28.372124 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.60s 2025-09-19 17:30:28.372131 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.51s 2025-09-19 17:30:28.372138 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.41s 2025-09-19 17:30:28.372145 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-09-19 17:30:28.372152 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.31s 2025-09-19 17:30:28.372158 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.22s 2025-09-19 17:30:28.658450 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-09-19 17:30:28.667871 | orchestrator | + set -e 2025-09-19 17:30:28.668001 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-19 17:30:28.668016 | orchestrator | ++ export INTERACTIVE=false 2025-09-19 17:30:28.668028 | orchestrator | ++ INTERACTIVE=false 2025-09-19 17:30:28.668038 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-19 17:30:28.668050 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-19 17:30:28.668068 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-09-19 17:30:28.669129 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-09-19 17:30:28.672251 | orchestrator | 2025-09-19 17:30:28.672303 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-19 17:30:28.672315 | orchestrator | ++ MANAGER_VERSION=latest 2025-09-19 17:30:28.672326 | orchestrator | + export OS_CLOUD=admin 2025-09-19 17:30:28.672337 | orchestrator | + OS_CLOUD=admin 2025-09-19 
17:30:28.672348 | orchestrator | + echo 2025-09-19 17:30:28.672931 | orchestrator | # OpenStack endpoints 2025-09-19 17:30:28.672952 | orchestrator | 2025-09-19 17:30:28.672964 | orchestrator | + echo '# OpenStack endpoints' 2025-09-19 17:30:28.672977 | orchestrator | + echo 2025-09-19 17:30:28.672989 | orchestrator | + openstack endpoint list 2025-09-19 17:30:32.377192 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-19 17:30:32.378391 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-09-19 17:30:32.378471 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-19 17:30:32.378485 | orchestrator | | 0dbed3cfdf48424d97c8b68961e5c0c2 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-09-19 17:30:32.378496 | orchestrator | | 1f12124119924ce48f0ccc30281f9113 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-09-19 17:30:32.378507 | orchestrator | | 2701c6dddbdd41458ab45b85737460a6 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-09-19 17:30:32.378518 | orchestrator | | 32bbb09e71104cf9acb79e41be89f7e0 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-09-19 17:30:32.378529 | orchestrator | | 40c288ac9b924e49b58a915f15110723 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-19 17:30:32.378539 | orchestrator | | 4201c982c730417c8ae0b3c95c81de61 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-09-19 17:30:32.378550 | orchestrator | | 
57595142fbc24d368c9c902b4c22f051 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-09-19 17:30:32.378560 | orchestrator | | 5b8bf38c84fa40d0a46d77be85342e49 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-09-19 17:30:32.378591 | orchestrator | | 78089e161de5456db7c057226d3a7102 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-09-19 17:30:32.378603 | orchestrator | | 789d7598cf754bc59007b452e39f2576 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-09-19 17:30:32.378614 | orchestrator | | 7a17d81d57174d6b985e313ee86d4934 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-09-19 17:30:32.378624 | orchestrator | | 804f82a1a529420d912f3f3b5ea36428 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-09-19 17:30:32.378635 | orchestrator | | 96623919637c4382be1a9698f52c767e | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-09-19 17:30:32.378646 | orchestrator | | bdb57ecd5f804f7d8db19fe992c62de7 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-09-19 17:30:32.378656 | orchestrator | | bf78763e8de8488e9ce7081a860c4ea7 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-19 17:30:32.378687 | orchestrator | | c4fbabece8be40f1a940760569fb90b1 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-09-19 17:30:32.378699 | orchestrator | | ca41800f02ca441cae2afd61937799f9 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-09-19 17:30:32.378709 | orchestrator | | d0292610980d4bfd840cc3bf03534392 | RegionOne | nova | 
compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-09-19 17:30:32.378720 | orchestrator | | d0b46911a1474773985e798c5c91bf7a | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-09-19 17:30:32.378731 | orchestrator | | dabc06a5c1084d139780ee4db5741b25 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-09-19 17:30:32.378770 | orchestrator | | e466972f076d4eec8b0be9a65d2d734f | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-09-19 17:30:32.378782 | orchestrator | | fbe20ccd7b0c4d6ea617b2629f2f3d68 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-09-19 17:30:32.378799 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-09-19 17:30:32.596968 | orchestrator | 2025-09-19 17:30:32.597066 | orchestrator | # Cinder 2025-09-19 17:30:32.597080 | orchestrator | 2025-09-19 17:30:32.597092 | orchestrator | + echo 2025-09-19 17:30:32.597103 | orchestrator | + echo '# Cinder' 2025-09-19 17:30:32.597114 | orchestrator | + echo 2025-09-19 17:30:32.597125 | orchestrator | + openstack volume service list 2025-09-19 17:30:35.221065 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 17:30:35.221170 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-09-19 17:30:35.221184 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 17:30:35.221196 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-19T17:30:26.000000 | 2025-09-19 17:30:35.221207 | orchestrator | | cinder-scheduler | testbed-node-2 | 
internal | enabled | up | 2025-09-19T17:30:26.000000 | 2025-09-19 17:30:35.221277 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-19T17:30:27.000000 | 2025-09-19 17:30:35.221288 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-09-19T17:30:26.000000 | 2025-09-19 17:30:35.221299 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-09-19T17:30:27.000000 | 2025-09-19 17:30:35.221310 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-09-19T17:30:28.000000 | 2025-09-19 17:30:35.221321 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-09-19T17:30:26.000000 | 2025-09-19 17:30:35.221331 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-09-19T17:30:26.000000 | 2025-09-19 17:30:35.221342 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-09-19T17:30:27.000000 | 2025-09-19 17:30:35.221353 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-09-19 17:30:35.461718 | orchestrator | 2025-09-19 17:30:35.461818 | orchestrator | # Neutron 2025-09-19 17:30:35.461832 | orchestrator | 2025-09-19 17:30:35.461844 | orchestrator | + echo 2025-09-19 17:30:35.461856 | orchestrator | + echo '# Neutron' 2025-09-19 17:30:35.461867 | orchestrator | + echo 2025-09-19 17:30:35.461878 | orchestrator | + openstack network agent list 2025-09-19 17:30:38.926856 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 17:30:38.926969 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-09-19 17:30:38.926985 | orchestrator | 
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 17:30:38.926997 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-09-19 17:30:38.927008 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-09-19 17:30:38.927019 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-09-19 17:30:38.927030 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-09-19 17:30:38.927041 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-09-19 17:30:38.927052 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-09-19 17:30:38.927062 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 17:30:38.927073 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 17:30:38.927084 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-09-19 17:30:38.927095 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-09-19 17:30:39.184807 | orchestrator | + openstack network service provider list 2025-09-19 17:30:41.675459 | orchestrator | +---------------+------+---------+ 2025-09-19 17:30:41.675560 | orchestrator | | Service Type | Name | Default | 2025-09-19 17:30:41.675574 | orchestrator | 
+---------------+------+---------+ 2025-09-19 17:30:41.675585 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-09-19 17:30:41.675595 | orchestrator | +---------------+------+---------+ 2025-09-19 17:30:41.912833 | orchestrator | 2025-09-19 17:30:41.912932 | orchestrator | # Nova 2025-09-19 17:30:41.912946 | orchestrator | 2025-09-19 17:30:41.912958 | orchestrator | + echo 2025-09-19 17:30:41.912970 | orchestrator | + echo '# Nova' 2025-09-19 17:30:41.912981 | orchestrator | + echo 2025-09-19 17:30:41.912992 | orchestrator | + openstack compute service list 2025-09-19 17:30:44.659776 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 17:30:44.659886 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-09-19 17:30:44.659913 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 17:30:44.659922 | orchestrator | | ad21b52d-ab1f-4405-98b9-7366234410ce | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-09-19T17:30:37.000000 | 2025-09-19 17:30:44.659939 | orchestrator | | 8153b5cf-7100-4ffd-b9ba-b1b85afd7b60 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-09-19T17:30:41.000000 | 2025-09-19 17:30:44.659947 | orchestrator | | 7bc8eb0c-8fa5-4c36-a4b2-fbd3c142b008 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-09-19T17:30:42.000000 | 2025-09-19 17:30:44.659955 | orchestrator | | 54087182-5ac5-4390-af91-fe6b735c39ac | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-09-19T17:30:34.000000 | 2025-09-19 17:30:44.659981 | orchestrator | | 06704702-bfd2-4b45-96da-75044f653797 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-09-19T17:30:36.000000 | 2025-09-19 17:30:44.659989 | orchestrator | | 949440fa-1c93-439f-97ba-cb11f648ae6b | nova-conductor | 
testbed-node-2 | internal | enabled | up | 2025-09-19T17:30:36.000000 | 2025-09-19 17:30:44.659997 | orchestrator | | 0a102a59-35fd-4e5e-aeda-ab2e61ef5153 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-09-19T17:30:42.000000 | 2025-09-19 17:30:44.660005 | orchestrator | | 917575d9-5f09-40e8-99d3-7aad42cc0e4d | nova-compute | testbed-node-4 | nova | enabled | up | 2025-09-19T17:30:42.000000 | 2025-09-19 17:30:44.660013 | orchestrator | | 71a711c8-ff32-47e5-b3d4-1e9c2f9ffd92 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-09-19T17:30:43.000000 | 2025-09-19 17:30:44.660021 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-09-19 17:30:44.909311 | orchestrator | + openstack hypervisor list 2025-09-19 17:30:47.581056 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 17:30:47.581157 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-09-19 17:30:47.581171 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 17:30:47.581183 | orchestrator | | 965d250b-e05b-4096-a777-3f98b1297c08 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-09-19 17:30:47.581193 | orchestrator | | 62305991-116a-470c-8b37-eaa2e1f61ba8 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-09-19 17:30:47.581204 | orchestrator | | 18defaad-5e95-4117-b08a-044edfb43420 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-09-19 17:30:47.581245 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-09-19 17:30:47.840039 | orchestrator | 2025-09-19 17:30:47.840137 | orchestrator | # Run OpenStack test play 2025-09-19 17:30:47.840157 | orchestrator | 2025-09-19 17:30:47.840175 | orchestrator | + echo 2025-09-19 
17:30:47.840192 | orchestrator | + echo '# Run OpenStack test play' 2025-09-19 17:30:47.840209 | orchestrator | + echo 2025-09-19 17:30:47.840264 | orchestrator | + osism apply --environment openstack test 2025-09-19 17:30:49.734885 | orchestrator | 2025-09-19 17:30:49 | INFO  | Trying to run play test in environment openstack 2025-09-19 17:30:49.809404 | orchestrator | 2025-09-19 17:30:49 | INFO  | Task 63c410ea-fad3-4a67-9345-d230b407f473 (test) was prepared for execution. 2025-09-19 17:30:49.809479 | orchestrator | 2025-09-19 17:30:49 | INFO  | It takes a moment until task 63c410ea-fad3-4a67-9345-d230b407f473 (test) has been started and output is visible here. 2025-09-19 17:37:45.475750 | orchestrator | 2025-09-19 17:37:45.475898 | orchestrator | PLAY [Create test project] ***************************************************** 2025-09-19 17:37:45.475912 | orchestrator | 2025-09-19 17:37:45.475922 | orchestrator | TASK [Create test domain] ****************************************************** 2025-09-19 17:37:45.475932 | orchestrator | Friday 19 September 2025 17:30:53 +0000 (0:00:00.077) 0:00:00.077 ****** 2025-09-19 17:37:45.475941 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.475950 | orchestrator | 2025-09-19 17:37:45.475959 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-09-19 17:37:45.475968 | orchestrator | Friday 19 September 2025 17:30:57 +0000 (0:00:03.591) 0:00:03.668 ****** 2025-09-19 17:37:45.475976 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.476020 | orchestrator | 2025-09-19 17:37:45.476029 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-09-19 17:37:45.476038 | orchestrator | Friday 19 September 2025 17:31:01 +0000 (0:00:04.041) 0:00:07.710 ****** 2025-09-19 17:37:45.476047 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.476056 | orchestrator | 2025-09-19 17:37:45.476064 | orchestrator | TASK [Create test 
project] ***************************************************** 2025-09-19 17:37:45.476092 | orchestrator | Friday 19 September 2025 17:31:07 +0000 (0:00:06.241) 0:00:13.952 ****** 2025-09-19 17:37:45.476101 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.476109 | orchestrator | 2025-09-19 17:37:45.476118 | orchestrator | TASK [Create test user] ******************************************************** 2025-09-19 17:37:45.476127 | orchestrator | Friday 19 September 2025 17:31:11 +0000 (0:00:03.926) 0:00:17.878 ****** 2025-09-19 17:37:45.476135 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.476144 | orchestrator | 2025-09-19 17:37:45.476152 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-09-19 17:37:45.476161 | orchestrator | Friday 19 September 2025 17:31:15 +0000 (0:00:04.288) 0:00:22.167 ****** 2025-09-19 17:37:45.476182 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-09-19 17:37:45.476191 | orchestrator | changed: [localhost] => (item=member) 2025-09-19 17:37:45.476200 | orchestrator | changed: [localhost] => (item=creator) 2025-09-19 17:37:45.476209 | orchestrator | 2025-09-19 17:37:45.476217 | orchestrator | TASK [Create test server group] ************************************************ 2025-09-19 17:37:45.476226 | orchestrator | Friday 19 September 2025 17:31:27 +0000 (0:00:11.678) 0:00:33.846 ****** 2025-09-19 17:37:45.476234 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.476243 | orchestrator | 2025-09-19 17:37:45.476251 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-09-19 17:37:45.476259 | orchestrator | Friday 19 September 2025 17:31:31 +0000 (0:00:04.195) 0:00:38.041 ****** 2025-09-19 17:37:45.476268 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.476276 | orchestrator | 2025-09-19 17:37:45.476285 | orchestrator | TASK [Add rule to ssh security group] 
****************************************** 2025-09-19 17:37:45.476293 | orchestrator | Friday 19 September 2025 17:31:36 +0000 (0:00:05.225) 0:00:43.266 ****** 2025-09-19 17:37:45.476302 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.476312 | orchestrator | 2025-09-19 17:37:45.476321 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-09-19 17:37:45.476331 | orchestrator | Friday 19 September 2025 17:31:41 +0000 (0:00:04.094) 0:00:47.361 ****** 2025-09-19 17:37:45.476341 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.476350 | orchestrator | 2025-09-19 17:37:45.476360 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-09-19 17:37:45.476369 | orchestrator | Friday 19 September 2025 17:31:45 +0000 (0:00:04.725) 0:00:52.087 ****** 2025-09-19 17:37:45.476379 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.476389 | orchestrator | 2025-09-19 17:37:45.476399 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-09-19 17:37:45.476408 | orchestrator | Friday 19 September 2025 17:31:49 +0000 (0:00:04.080) 0:00:56.167 ****** 2025-09-19 17:37:45.476418 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.476427 | orchestrator | 2025-09-19 17:37:45.476437 | orchestrator | TASK [Create test network topology] ******************************************** 2025-09-19 17:37:45.476446 | orchestrator | Friday 19 September 2025 17:31:53 +0000 (0:00:03.803) 0:00:59.971 ****** 2025-09-19 17:37:45.476456 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.476466 | orchestrator | 2025-09-19 17:37:45.476475 | orchestrator | TASK [Create test instances] *************************************************** 2025-09-19 17:37:45.476485 | orchestrator | Friday 19 September 2025 17:32:09 +0000 (0:00:15.636) 0:01:15.607 ****** 2025-09-19 17:37:45.476494 | orchestrator | changed: [localhost] => (item=test) 
2025-09-19 17:37:45.476504 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-19 17:37:45.476513 | orchestrator | 2025-09-19 17:37:45.476523 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-19 17:37:45.476532 | orchestrator | 2025-09-19 17:37:45.476542 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-19 17:37:45.476551 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-19 17:37:45.476561 | orchestrator | 2025-09-19 17:37:45.476571 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-19 17:37:45.476587 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-19 17:37:45.476596 | orchestrator | 2025-09-19 17:37:45.476604 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-09-19 17:37:45.476613 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-19 17:37:45.476621 | orchestrator | 2025-09-19 17:37:45.476630 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-09-19 17:37:45.476638 | orchestrator | Friday 19 September 2025 17:36:23 +0000 (0:04:13.998) 0:05:29.605 ****** 2025-09-19 17:37:45.476646 | orchestrator | changed: [localhost] => (item=test) 2025-09-19 17:37:45.476655 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-19 17:37:45.476663 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-19 17:37:45.476671 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-19 17:37:45.476680 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-19 17:37:45.476688 | orchestrator | 2025-09-19 17:37:45.476697 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-09-19 17:37:45.476706 | orchestrator | Friday 19 September 2025 17:36:46 +0000 (0:00:22.947) 0:05:52.553 ****** 2025-09-19 
17:37:45.476730 | orchestrator | changed: [localhost] => (item=test) 2025-09-19 17:37:45.476739 | orchestrator | changed: [localhost] => (item=test-1) 2025-09-19 17:37:45.476748 | orchestrator | changed: [localhost] => (item=test-2) 2025-09-19 17:37:45.476756 | orchestrator | changed: [localhost] => (item=test-3) 2025-09-19 17:37:45.476764 | orchestrator | changed: [localhost] => (item=test-4) 2025-09-19 17:37:45.476773 | orchestrator | 2025-09-19 17:37:45.476781 | orchestrator | TASK [Create test volume] ****************************************************** 2025-09-19 17:37:45.476802 | orchestrator | Friday 19 September 2025 17:37:19 +0000 (0:00:33.350) 0:06:25.904 ****** 2025-09-19 17:37:45.476810 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.476819 | orchestrator | 2025-09-19 17:37:45.476836 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-09-19 17:37:45.476845 | orchestrator | Friday 19 September 2025 17:37:26 +0000 (0:00:06.913) 0:06:32.817 ****** 2025-09-19 17:37:45.476854 | orchestrator | changed: [localhost] 2025-09-19 17:37:45.476862 | orchestrator | 2025-09-19 17:37:45.476871 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-09-19 17:37:45.476879 | orchestrator | Friday 19 September 2025 17:37:40 +0000 (0:00:13.471) 0:06:46.289 ****** 2025-09-19 17:37:45.476888 | orchestrator | ok: [localhost] 2025-09-19 17:37:45.476896 | orchestrator | 2025-09-19 17:37:45.476909 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-09-19 17:37:45.476918 | orchestrator | Friday 19 September 2025 17:37:45 +0000 (0:00:05.184) 0:06:51.474 ****** 2025-09-19 17:37:45.476926 | orchestrator | ok: [localhost] => { 2025-09-19 17:37:45.476934 | orchestrator |  "msg": "192.168.112.119" 2025-09-19 17:37:45.476943 | orchestrator | } 2025-09-19 17:37:45.476952 | orchestrator | 2025-09-19 17:37:45.476960 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-09-19 17:37:45.476969 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-19 17:37:45.476979 | orchestrator | 2025-09-19 17:37:45.477003 | orchestrator | 2025-09-19 17:37:45.477012 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-19 17:37:45.477021 | orchestrator | Friday 19 September 2025 17:37:45 +0000 (0:00:00.036) 0:06:51.510 ****** 2025-09-19 17:37:45.477029 | orchestrator | =============================================================================== 2025-09-19 17:37:45.477037 | orchestrator | Create test instances ------------------------------------------------- 254.00s 2025-09-19 17:37:45.477046 | orchestrator | Add tag to instances --------------------------------------------------- 33.35s 2025-09-19 17:37:45.477054 | orchestrator | Add metadata to instances ---------------------------------------------- 22.95s 2025-09-19 17:37:45.477062 | orchestrator | Create test network topology ------------------------------------------- 15.64s 2025-09-19 17:37:45.477071 | orchestrator | Attach test volume ----------------------------------------------------- 13.47s 2025-09-19 17:37:45.477085 | orchestrator | Add member roles to user test ------------------------------------------ 11.68s 2025-09-19 17:37:45.477094 | orchestrator | Create test volume ------------------------------------------------------ 6.91s 2025-09-19 17:37:45.477102 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.24s 2025-09-19 17:37:45.477111 | orchestrator | Create ssh security group ----------------------------------------------- 5.23s 2025-09-19 17:37:45.477119 | orchestrator | Create floating ip address ---------------------------------------------- 5.18s 2025-09-19 17:37:45.477127 | orchestrator | Create icmp security group 
---------------------------------------------- 4.73s 2025-09-19 17:37:45.477136 | orchestrator | Create test user -------------------------------------------------------- 4.29s 2025-09-19 17:37:45.477144 | orchestrator | Create test server group ------------------------------------------------ 4.20s 2025-09-19 17:37:45.477153 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.09s 2025-09-19 17:37:45.477161 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.08s 2025-09-19 17:37:45.477169 | orchestrator | Create test-admin user -------------------------------------------------- 4.04s 2025-09-19 17:37:45.477178 | orchestrator | Create test project ----------------------------------------------------- 3.93s 2025-09-19 17:37:45.477186 | orchestrator | Create test keypair ----------------------------------------------------- 3.80s 2025-09-19 17:37:45.477195 | orchestrator | Create test domain ------------------------------------------------------ 3.59s 2025-09-19 17:37:45.477203 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s 2025-09-19 17:37:45.733945 | orchestrator | + server_list 2025-09-19 17:37:45.734150 | orchestrator | + openstack --os-cloud test server list 2025-09-19 17:37:49.517241 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-09-19 17:37:49.517349 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-09-19 17:37:49.517364 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-09-19 17:37:49.517376 | orchestrator | | 7fd2ef52-d999-4639-b62b-c9326f729bf2 | test-4 | ACTIVE | auto_allocated_network=10.42.0.34, 192.168.112.192 | N/A (booted from volume) | SCS-1L-1 | 2025-09-19 
17:37:49.517410 | orchestrator | | 81208bfb-6a09-40fd-ac3f-a554c1536141 | test-3 | ACTIVE | auto_allocated_network=10.42.0.23, 192.168.112.126 | N/A (booted from volume) | SCS-1L-1 | 2025-09-19 17:37:49.517422 | orchestrator | | b480628c-9e7b-4709-bbfb-a69a6d3ddad3 | test-2 | ACTIVE | auto_allocated_network=10.42.0.46, 192.168.112.111 | N/A (booted from volume) | SCS-1L-1 | 2025-09-19 17:37:49.517433 | orchestrator | | 27757632-bcf8-4008-8bac-5114182ea4a7 | test-1 | ACTIVE | auto_allocated_network=10.42.0.51, 192.168.112.188 | N/A (booted from volume) | SCS-1L-1 | 2025-09-19 17:37:49.517444 | orchestrator | | 2fddffd0-2cc6-4172-9014-31e19f6b2734 | test | ACTIVE | auto_allocated_network=10.42.0.4, 192.168.112.119 | N/A (booted from volume) | SCS-1L-1 | 2025-09-19 17:37:49.517455 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------------------+----------+ 2025-09-19 17:37:49.747411 | orchestrator | + openstack --os-cloud test server show test 2025-09-19 17:37:52.873328 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:37:52.873469 | orchestrator | | Field | Value | 2025-09-19 17:37:52.873507 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:37:52.873518 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-19 17:37:52.873530 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-19 17:37:52.873547 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-19 17:37:52.873563 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-09-19 17:37:52.873581 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-19 17:37:52.873598 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-19 17:37:52.873627 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-19 17:37:52.873638 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-19 17:37:52.873660 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-19 17:37:52.873671 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-19 17:37:52.873680 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-19 17:37:52.873690 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-19 17:37:52.873700 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-19 17:37:52.873709 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-19 17:37:52.873719 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-19 17:37:52.873729 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-19T17:32:54.000000 | 2025-09-19 17:37:52.873747 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-19 17:37:52.873763 | orchestrator | | accessIPv4 | | 2025-09-19 17:37:52.873777 | orchestrator | | accessIPv6 | | 2025-09-19 17:37:52.873787 | orchestrator | | addresses 
| auto_allocated_network=10.42.0.4, 192.168.112.119 | 2025-09-19 17:37:52.873797 | orchestrator | | config_drive | | 2025-09-19 17:37:52.873807 | orchestrator | | created | 2025-09-19T17:32:18Z | 2025-09-19 17:37:52.873817 | orchestrator | | description | None | 2025-09-19 17:37:52.873827 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-19 17:37:52.873839 | orchestrator | | hostId | 1c174c3c4420767518be0f5eedbba2070e287f42f6b3c6f78d783a9f | 2025-09-19 17:37:52.873850 | orchestrator | | host_status | None | 2025-09-19 17:37:52.873869 | orchestrator | | id | 2fddffd0-2cc6-4172-9014-31e19f6b2734 | 2025-09-19 17:37:52.873886 | orchestrator | | image | N/A (booted from volume) | 2025-09-19 17:37:52.873902 | orchestrator | | key_name | test | 2025-09-19 17:37:52.873914 | orchestrator | | locked | False | 2025-09-19 17:37:52.873926 | orchestrator | | locked_reason | None | 2025-09-19 17:37:52.873937 | orchestrator | | name | test | 2025-09-19 17:37:52.873948 | orchestrator | | pinned_availability_zone | None | 2025-09-19 17:37:52.873959 | orchestrator | | progress | 0 | 2025-09-19 17:37:52.873970 | orchestrator | | project_id | 60cc1c51c6254b318e2f2cbf719c9333 | 2025-09-19 17:37:52.874009 | orchestrator | | properties | hostname='test' | 2025-09-19 17:37:52.874106 | orchestrator | | security_groups | name='ssh' | 2025-09-19 17:37:52.874120 | orchestrator | | | name='icmp' | 2025-09-19 17:37:52.874132 | orchestrator | | server_groups | None | 2025-09-19 17:37:52.874143 | orchestrator | | status | ACTIVE | 2025-09-19 17:37:52.874154 | orchestrator | | tags | test | 2025-09-19 17:37:52.874165 | orchestrator | | 
trusted_image_certificates | None | 2025-09-19 17:37:52.874188 | orchestrator | | updated | 2025-09-19T17:36:28Z | 2025-09-19 17:37:52.874199 | orchestrator | | user_id | 9a747d7dee6840288ddc9920c17769c2 | 2025-09-19 17:37:52.874209 | orchestrator | | volumes_attached | delete_on_termination='True', id='cc1617a1-d741-496d-a59f-2efea1e8b8b5' | 2025-09-19 17:37:52.874225 | orchestrator | | | delete_on_termination='False', id='e203a1da-37e0-44a6-af85-ff4454dce129' | 2025-09-19 17:37:52.878422 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:37:53.129347 | orchestrator | + openstack --os-cloud test server show test-1 2025-09-19 17:37:56.267435 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:37:56.267558 | orchestrator | | Field | Value | 2025-09-19 17:37:56.267575 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2025-09-19 17:37:56.267587 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-19 17:37:56.267598 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-19 17:37:56.267609 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-19 17:37:56.267620 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-09-19 17:37:56.267655 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-19 17:37:56.267666 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-19 17:37:56.267695 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-19 17:37:56.267707 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-19 17:37:56.267724 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-19 17:37:56.267735 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-19 17:37:56.267746 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-19 17:37:56.267758 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-19 17:37:56.267769 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-19 17:37:56.267787 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-19 17:37:56.267798 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-19 17:37:56.267809 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-19T17:33:46.000000 | 2025-09-19 17:37:56.267827 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-19 17:37:56.267840 | orchestrator | | accessIPv4 | | 2025-09-19 17:37:56.267857 | orchestrator | | accessIPv6 | | 2025-09-19 17:37:56.267868 | orchestrator | | addresses | auto_allocated_network=10.42.0.51, 192.168.112.188 | 2025-09-19 17:37:56.267879 | orchestrator | | config_drive | | 2025-09-19 17:37:56.267890 | orchestrator | | created | 2025-09-19T17:33:13Z | 2025-09-19 17:37:56.267901 | orchestrator | | description | None | 2025-09-19 17:37:56.267919 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', 
extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-19 17:37:56.267930 | orchestrator | | hostId | 8bfe537ff9fc6c6535e7266fa44344b5aa93b5cfbbd030b866b549d1 | 2025-09-19 17:37:56.267941 | orchestrator | | host_status | None | 2025-09-19 17:37:56.267959 | orchestrator | | id | 27757632-bcf8-4008-8bac-5114182ea4a7 | 2025-09-19 17:37:56.268021 | orchestrator | | image | N/A (booted from volume) | 2025-09-19 17:37:56.268035 | orchestrator | | key_name | test | 2025-09-19 17:37:56.268048 | orchestrator | | locked | False | 2025-09-19 17:37:56.268061 | orchestrator | | locked_reason | None | 2025-09-19 17:37:56.268073 | orchestrator | | name | test-1 | 2025-09-19 17:37:56.268093 | orchestrator | | pinned_availability_zone | None | 2025-09-19 17:37:56.268105 | orchestrator | | progress | 0 | 2025-09-19 17:37:56.268118 | orchestrator | | project_id | 60cc1c51c6254b318e2f2cbf719c9333 | 2025-09-19 17:37:56.268131 | orchestrator | | properties | hostname='test-1' | 2025-09-19 17:37:56.268151 | orchestrator | | security_groups | name='ssh' | 2025-09-19 17:37:56.268165 | orchestrator | | | name='icmp' | 2025-09-19 17:37:56.268178 | orchestrator | | server_groups | None | 2025-09-19 17:37:56.268191 | orchestrator | | status | ACTIVE | 2025-09-19 17:37:56.268203 | orchestrator | | tags | test | 2025-09-19 17:37:56.268223 | orchestrator | | trusted_image_certificates | None | 2025-09-19 17:37:56.268235 | orchestrator | | updated | 2025-09-19T17:36:32Z | 2025-09-19 17:37:56.268248 | orchestrator | | user_id | 9a747d7dee6840288ddc9920c17769c2 | 2025-09-19 17:37:56.268261 | orchestrator | | volumes_attached | delete_on_termination='True', id='5a960007-d246-40e8-8109-d368f6d0ae29' | 2025-09-19 17:37:56.271576 | orchestrator | 
+-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:37:56.518723 | orchestrator | + openstack --os-cloud test server show test-2 2025-09-19 17:37:59.445340 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:37:59.445452 | orchestrator | | Field | Value | 2025-09-19 17:37:59.445468 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:37:59.445481 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-19 17:37:59.445515 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-19 17:37:59.445528 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-19 17:37:59.445540 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-09-19 17:37:59.445579 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-19 17:37:59.445592 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-19 
17:37:59.445624 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-19 17:37:59.445637 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-19 17:37:59.445654 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-19 17:37:59.445665 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-19 17:37:59.445685 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-19 17:37:59.445697 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-19 17:37:59.445709 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-19 17:37:59.445721 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-19 17:37:59.445733 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-19 17:37:59.445745 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-19T17:34:41.000000 | 2025-09-19 17:37:59.445765 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-19 17:37:59.445777 | orchestrator | | accessIPv4 | | 2025-09-19 17:37:59.445792 | orchestrator | | accessIPv6 | | 2025-09-19 17:37:59.445804 | orchestrator | | addresses | auto_allocated_network=10.42.0.46, 192.168.112.111 | 2025-09-19 17:37:59.445822 | orchestrator | | config_drive | | 2025-09-19 17:37:59.445834 | orchestrator | | created | 2025-09-19T17:34:08Z | 2025-09-19 17:37:59.445846 | orchestrator | | description | None | 2025-09-19 17:37:59.445858 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-19 17:37:59.445873 | orchestrator | | hostId | 0bc79836371aa1f19f78b880a423d79f886cf8933bea3c72b80d7daf | 2025-09-19 17:37:59.445887 | orchestrator | | host_status | None | 2025-09-19 17:37:59.445909 | orchestrator 
| | id | b480628c-9e7b-4709-bbfb-a69a6d3ddad3 | 2025-09-19 17:37:59.445923 | orchestrator | | image | N/A (booted from volume) | 2025-09-19 17:37:59.445940 | orchestrator | | key_name | test | 2025-09-19 17:37:59.445959 | orchestrator | | locked | False | 2025-09-19 17:37:59.446001 | orchestrator | | locked_reason | None | 2025-09-19 17:37:59.446015 | orchestrator | | name | test-2 | 2025-09-19 17:37:59.446106 | orchestrator | | pinned_availability_zone | None | 2025-09-19 17:37:59.446120 | orchestrator | | progress | 0 | 2025-09-19 17:37:59.446134 | orchestrator | | project_id | 60cc1c51c6254b318e2f2cbf719c9333 | 2025-09-19 17:37:59.446148 | orchestrator | | properties | hostname='test-2' | 2025-09-19 17:37:59.446173 | orchestrator | | security_groups | name='ssh' | 2025-09-19 17:37:59.446187 | orchestrator | | | name='icmp' | 2025-09-19 17:37:59.446212 | orchestrator | | server_groups | None | 2025-09-19 17:37:59.446227 | orchestrator | | status | ACTIVE | 2025-09-19 17:37:59.446242 | orchestrator | | tags | test | 2025-09-19 17:37:59.446254 | orchestrator | | trusted_image_certificates | None | 2025-09-19 17:37:59.446266 | orchestrator | | updated | 2025-09-19T17:36:37Z | 2025-09-19 17:37:59.446278 | orchestrator | | user_id | 9a747d7dee6840288ddc9920c17769c2 | 2025-09-19 17:37:59.446290 | orchestrator | | volumes_attached | delete_on_termination='True', id='6653556a-4b94-4c74-9128-ed41dc83fb00' | 2025-09-19 17:37:59.449325 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:37:59.687042 | orchestrator | + openstack --os-cloud test server show test-3 2025-09-19 17:38:02.647649 
| orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:38:02.647770 | orchestrator | | Field | Value | 2025-09-19 17:38:02.647786 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:38:02.647798 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-19 17:38:02.647809 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-19 17:38:02.647821 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-19 17:38:02.647832 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-09-19 17:38:02.647843 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-19 17:38:02.647854 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-19 17:38:02.647886 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-19 17:38:02.647921 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-19 17:38:02.647938 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-19 17:38:02.647950 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-19 17:38:02.648010 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-19 17:38:02.648022 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-19 17:38:02.648033 | orchestrator | | 
OS-EXT-STS:power_state | Running | 2025-09-19 17:38:02.648045 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-19 17:38:02.648056 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-19 17:38:02.648067 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-19T17:35:27.000000 | 2025-09-19 17:38:02.648085 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-19 17:38:02.648111 | orchestrator | | accessIPv4 | | 2025-09-19 17:38:02.648128 | orchestrator | | accessIPv6 | | 2025-09-19 17:38:02.648139 | orchestrator | | addresses | auto_allocated_network=10.42.0.23, 192.168.112.126 | 2025-09-19 17:38:02.648150 | orchestrator | | config_drive | | 2025-09-19 17:38:02.648161 | orchestrator | | created | 2025-09-19T17:35:02Z | 2025-09-19 17:38:02.648172 | orchestrator | | description | None | 2025-09-19 17:38:02.648184 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-19 17:38:02.648207 | orchestrator | | hostId | 8bfe537ff9fc6c6535e7266fa44344b5aa93b5cfbbd030b866b549d1 | 2025-09-19 17:38:02.648219 | orchestrator | | host_status | None | 2025-09-19 17:38:02.648256 | orchestrator | | id | 81208bfb-6a09-40fd-ac3f-a554c1536141 | 2025-09-19 17:38:02.648268 | orchestrator | | image | N/A (booted from volume) | 2025-09-19 17:38:02.648284 | orchestrator | | key_name | test | 2025-09-19 17:38:02.648295 | orchestrator | | locked | False | 2025-09-19 17:38:02.648307 | orchestrator | | locked_reason | None | 2025-09-19 17:38:02.648318 | orchestrator | | name | test-3 | 2025-09-19 17:38:02.648329 | orchestrator | | pinned_availability_zone | None | 2025-09-19 17:38:02.648341 | orchestrator | | progress | 0 | 
2025-09-19 17:38:02.648352 | orchestrator | | project_id | 60cc1c51c6254b318e2f2cbf719c9333 | 2025-09-19 17:38:02.648369 | orchestrator | | properties | hostname='test-3' | 2025-09-19 17:38:02.648388 | orchestrator | | security_groups | name='ssh' | 2025-09-19 17:38:02.648399 | orchestrator | | | name='icmp' | 2025-09-19 17:38:02.648415 | orchestrator | | server_groups | None | 2025-09-19 17:38:02.648426 | orchestrator | | status | ACTIVE | 2025-09-19 17:38:02.648437 | orchestrator | | tags | test | 2025-09-19 17:38:02.648448 | orchestrator | | trusted_image_certificates | None | 2025-09-19 17:38:02.648459 | orchestrator | | updated | 2025-09-19T17:36:41Z | 2025-09-19 17:38:02.648470 | orchestrator | | user_id | 9a747d7dee6840288ddc9920c17769c2 | 2025-09-19 17:38:02.648488 | orchestrator | | volumes_attached | delete_on_termination='True', id='cb43fdc4-174d-46a5-a703-fa344af7a39c' | 2025-09-19 17:38:02.652262 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:38:02.894245 | orchestrator | + openstack --os-cloud test server show test-4 2025-09-19 17:38:06.300472 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:38:06.300589 | orchestrator | | Field | Value | 2025-09-19 17:38:06.300628 | 
orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:38:06.300643 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-09-19 17:38:06.300658 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-09-19 17:38:06.300672 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-09-19 17:38:06.300686 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-09-19 17:38:06.300698 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-09-19 17:38:06.300736 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-09-19 17:38:06.300770 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-09-19 17:38:06.300785 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-09-19 17:38:06.300800 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-09-19 17:38:06.300821 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-09-19 17:38:06.300836 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-09-19 17:38:06.300849 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-09-19 17:38:06.300864 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-09-19 17:38:06.300878 | orchestrator | | OS-EXT-STS:task_state | None | 2025-09-19 17:38:06.300902 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-09-19 17:38:06.300917 | orchestrator | | OS-SRV-USG:launched_at | 2025-09-19T17:36:11.000000 | 2025-09-19 17:38:06.300940 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-09-19 17:38:06.301027 | orchestrator | | accessIPv4 | | 2025-09-19 17:38:06.301046 | orchestrator | | accessIPv6 | | 2025-09-19 17:38:06.301060 | 
orchestrator | | addresses | auto_allocated_network=10.42.0.34, 192.168.112.192 | 2025-09-19 17:38:06.301074 | orchestrator | | config_drive | | 2025-09-19 17:38:06.301089 | orchestrator | | created | 2025-09-19T17:35:45Z | 2025-09-19 17:38:06.301105 | orchestrator | | description | None | 2025-09-19 17:38:06.301132 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-09-19 17:38:06.301149 | orchestrator | | hostId | 0bc79836371aa1f19f78b880a423d79f886cf8933bea3c72b80d7daf | 2025-09-19 17:38:06.301577 | orchestrator | | host_status | None | 2025-09-19 17:38:06.301636 | orchestrator | | id | 7fd2ef52-d999-4639-b62b-c9326f729bf2 | 2025-09-19 17:38:06.301647 | orchestrator | | image | N/A (booted from volume) | 2025-09-19 17:38:06.301656 | orchestrator | | key_name | test | 2025-09-19 17:38:06.301664 | orchestrator | | locked | False | 2025-09-19 17:38:06.301672 | orchestrator | | locked_reason | None | 2025-09-19 17:38:06.301680 | orchestrator | | name | test-4 | 2025-09-19 17:38:06.301699 | orchestrator | | pinned_availability_zone | None | 2025-09-19 17:38:06.301707 | orchestrator | | progress | 0 | 2025-09-19 17:38:06.301715 | orchestrator | | project_id | 60cc1c51c6254b318e2f2cbf719c9333 | 2025-09-19 17:38:06.301724 | orchestrator | | properties | hostname='test-4' | 2025-09-19 17:38:06.301743 | orchestrator | | security_groups | name='ssh' | 2025-09-19 17:38:06.301752 | orchestrator | | | name='icmp' | 2025-09-19 17:38:06.301760 | orchestrator | | server_groups | None | 2025-09-19 17:38:06.301768 | orchestrator | | status | ACTIVE | 2025-09-19 17:38:06.301776 | orchestrator | | tags | test | 2025-09-19 
17:38:06.301784 | orchestrator | | trusted_image_certificates | None | 2025-09-19 17:38:06.301797 | orchestrator | | updated | 2025-09-19T17:36:46Z | 2025-09-19 17:38:06.301805 | orchestrator | | user_id | 9a747d7dee6840288ddc9920c17769c2 | 2025-09-19 17:38:06.301813 | orchestrator | | volumes_attached | delete_on_termination='True', id='b25efbea-4813-483a-bb7f-d0f10fc3fd6b' | 2025-09-19 17:38:06.305885 | orchestrator | +-------------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-09-19 17:38:06.559770 | orchestrator | + server_ping 2025-09-19 17:38:06.561470 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-09-19 17:38:06.561979 | orchestrator | ++ tr -d '\r' 2025-09-19 17:38:09.387454 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 17:38:09.387555 | orchestrator | + ping -c3 192.168.112.188 2025-09-19 17:38:09.401932 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data. 
2025-09-19 17:38:09.402105 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=8.97 ms 2025-09-19 17:38:10.396934 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.52 ms 2025-09-19 17:38:11.398578 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=2.07 ms 2025-09-19 17:38:11.398695 | orchestrator | 2025-09-19 17:38:11.398711 | orchestrator | --- 192.168.112.188 ping statistics --- 2025-09-19 17:38:11.398724 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-09-19 17:38:11.398735 | orchestrator | rtt min/avg/max/mdev = 2.071/4.519/8.971/3.153 ms 2025-09-19 17:38:11.398748 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 17:38:11.398759 | orchestrator | + ping -c3 192.168.112.111 2025-09-19 17:38:11.412794 | orchestrator | PING 192.168.112.111 (192.168.112.111) 56(84) bytes of data. 2025-09-19 17:38:11.412875 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=1 ttl=63 time=9.52 ms 2025-09-19 17:38:12.407992 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=2 ttl=63 time=2.79 ms 2025-09-19 17:38:13.409598 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=3 ttl=63 time=2.26 ms 2025-09-19 17:38:13.409698 | orchestrator | 2025-09-19 17:38:13.409712 | orchestrator | --- 192.168.112.111 ping statistics --- 2025-09-19 17:38:13.409725 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-19 17:38:13.409737 | orchestrator | rtt min/avg/max/mdev = 2.259/4.856/9.517/3.303 ms 2025-09-19 17:38:13.410577 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 17:38:13.410604 | orchestrator | + ping -c3 192.168.112.119 2025-09-19 17:38:13.423346 | orchestrator | PING 192.168.112.119 (192.168.112.119) 56(84) bytes of data. 
2025-09-19 17:38:13.423441 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=1 ttl=63 time=8.21 ms 2025-09-19 17:38:14.420229 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=2 ttl=63 time=2.58 ms 2025-09-19 17:38:15.420386 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=3 ttl=63 time=2.04 ms 2025-09-19 17:38:15.420467 | orchestrator | 2025-09-19 17:38:15.420478 | orchestrator | --- 192.168.112.119 ping statistics --- 2025-09-19 17:38:15.420486 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-09-19 17:38:15.420494 | orchestrator | rtt min/avg/max/mdev = 2.041/4.276/8.213/2.791 ms 2025-09-19 17:38:15.420930 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 17:38:15.420999 | orchestrator | + ping -c3 192.168.112.192 2025-09-19 17:38:15.432002 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data. 2025-09-19 17:38:15.432030 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=6.67 ms 2025-09-19 17:38:16.430248 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.49 ms 2025-09-19 17:38:17.431405 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=2.00 ms 2025-09-19 17:38:17.431698 | orchestrator | 2025-09-19 17:38:17.431724 | orchestrator | --- 192.168.112.192 ping statistics --- 2025-09-19 17:38:17.431737 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-09-19 17:38:17.431750 | orchestrator | rtt min/avg/max/mdev = 1.996/3.721/6.674/2.097 ms 2025-09-19 17:38:17.431778 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-09-19 17:38:17.431792 | orchestrator | + ping -c3 192.168.112.126 2025-09-19 17:38:17.443715 | orchestrator | PING 192.168.112.126 (192.168.112.126) 56(84) bytes of data. 
2025-09-19 17:38:17.443772 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=1 ttl=63 time=7.29 ms
2025-09-19 17:38:18.441142 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=2 ttl=63 time=2.70 ms
2025-09-19 17:38:19.442476 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=3 ttl=63 time=1.90 ms
2025-09-19 17:38:19.442578 | orchestrator |
2025-09-19 17:38:19.442593 | orchestrator | --- 192.168.112.126 ping statistics ---
2025-09-19 17:38:19.442605 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-19 17:38:19.442616 | orchestrator | rtt min/avg/max/mdev = 1.897/3.962/7.291/2.376 ms
2025-09-19 17:38:19.443051 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-19 17:38:19.443078 | orchestrator | + compute_list
2025-09-19 17:38:19.443090 | orchestrator | + osism manage compute list testbed-node-3
2025-09-19 17:38:22.913543 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:38:22.913652 | orchestrator | | ID | Name | Status |
2025-09-19 17:38:22.913665 | orchestrator | |--------------------------------------+--------+----------|
2025-09-19 17:38:22.913676 | orchestrator | | 7fd2ef52-d999-4639-b62b-c9326f729bf2 | test-4 | ACTIVE |
2025-09-19 17:38:22.913687 | orchestrator | | b480628c-9e7b-4709-bbfb-a69a6d3ddad3 | test-2 | ACTIVE |
2025-09-19 17:38:22.913698 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:38:23.212429 | orchestrator | + osism manage compute list testbed-node-4
2025-09-19 17:38:26.730481 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:38:26.730587 | orchestrator | | ID | Name | Status |
2025-09-19 17:38:26.730602 | orchestrator | |--------------------------------------+--------+----------|
2025-09-19 17:38:26.730613 | orchestrator | | 81208bfb-6a09-40fd-ac3f-a554c1536141 | test-3 | ACTIVE |
2025-09-19 17:38:26.730624 | orchestrator | | 27757632-bcf8-4008-8bac-5114182ea4a7 | test-1 | ACTIVE |
2025-09-19 17:38:26.730635 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:38:27.012383 | orchestrator | + osism manage compute list testbed-node-5
2025-09-19 17:38:30.201517 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:38:30.201626 | orchestrator | | ID | Name | Status |
2025-09-19 17:38:30.201682 | orchestrator | |--------------------------------------+--------+----------|
2025-09-19 17:38:30.201700 | orchestrator | | 2fddffd0-2cc6-4172-9014-31e19f6b2734 | test | ACTIVE |
2025-09-19 17:38:30.201733 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:38:30.498735 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
2025-09-19 17:38:33.582957 | orchestrator | 2025-09-19 17:38:33 | INFO  | Live migrating server 81208bfb-6a09-40fd-ac3f-a554c1536141
2025-09-19 17:38:46.538514 | orchestrator | 2025-09-19 17:38:46 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:38:49.126862 | orchestrator | 2025-09-19 17:38:49 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:38:51.554457 | orchestrator | 2025-09-19 17:38:51 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:38:53.985727 | orchestrator | 2025-09-19 17:38:53 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:38:56.269856 | orchestrator | 2025-09-19 17:38:56 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:38:58.577512 | orchestrator | 2025-09-19 17:38:58 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:39:00.830554 | orchestrator | 2025-09-19 17:39:00 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:39:03.227457 | orchestrator | 2025-09-19 17:39:03 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:39:05.501680 | orchestrator | 2025-09-19 17:39:05 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:39:07.910457 | orchestrator | 2025-09-19 17:39:07 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) completed with status ACTIVE
2025-09-19 17:39:07.910538 | orchestrator | 2025-09-19 17:39:07 | INFO  | Live migrating server 27757632-bcf8-4008-8bac-5114182ea4a7
2025-09-19 17:39:18.889953 | orchestrator | 2025-09-19 17:39:18 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:39:21.188780 | orchestrator | 2025-09-19 17:39:21 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:39:23.558713 | orchestrator | 2025-09-19 17:39:23 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:39:25.896370 | orchestrator | 2025-09-19 17:39:25 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:39:28.166318 | orchestrator | 2025-09-19 17:39:28 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:39:30.421717 | orchestrator | 2025-09-19 17:39:30 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:39:32.773653 | orchestrator | 2025-09-19 17:39:32 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:39:35.080454 | orchestrator | 2025-09-19 17:39:35 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
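The `osism manage compute migrate` output above follows a plain trigger-and-poll pattern: start a live migration, then repeatedly check the server status until it leaves the migrating state. A minimal shell sketch of that pattern, where `get_server_status` is a hypothetical stub standing in for a real status query such as `openstack server show <id> -f value -c status` (not the actual osism implementation):

```shell
# Hypothetical stub: in a real deployment this would query the API, e.g.
#   openstack server show "$1" -f value -c status
get_server_status() { echo "ACTIVE"; }

# Poll until the server is no longer MIGRATING, logging progress
# roughly the way the osism output above does.
wait_for_migration() {
    local server="$1"
    local status
    while status="$(get_server_status "$server")" && [ "$status" = "MIGRATING" ]; do
        echo "Live migration of $server is still in progress"
        sleep 2
    done
    echo "Live migration of $server completed with status $status"
}
```

The real tool also handles error states and timeouts; this sketch only shows the polling loop visible in the log.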
2025-09-19 17:39:37.434401 | orchestrator | 2025-09-19 17:39:37 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) completed with status ACTIVE
2025-09-19 17:39:37.704325 | orchestrator | + compute_list
2025-09-19 17:39:37.704412 | orchestrator | + osism manage compute list testbed-node-3
2025-09-19 17:39:40.864775 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:39:40.864922 | orchestrator | | ID | Name | Status |
2025-09-19 17:39:40.864934 | orchestrator | |--------------------------------------+--------+----------|
2025-09-19 17:39:40.864941 | orchestrator | | 7fd2ef52-d999-4639-b62b-c9326f729bf2 | test-4 | ACTIVE |
2025-09-19 17:39:40.864949 | orchestrator | | 81208bfb-6a09-40fd-ac3f-a554c1536141 | test-3 | ACTIVE |
2025-09-19 17:39:40.864955 | orchestrator | | b480628c-9e7b-4709-bbfb-a69a6d3ddad3 | test-2 | ACTIVE |
2025-09-19 17:39:40.864963 | orchestrator | | 27757632-bcf8-4008-8bac-5114182ea4a7 | test-1 | ACTIVE |
2025-09-19 17:39:40.864970 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:39:41.163496 | orchestrator | + osism manage compute list testbed-node-4
2025-09-19 17:39:43.934359 | orchestrator | +------+--------+----------+
2025-09-19 17:39:43.934460 | orchestrator | | ID | Name | Status |
2025-09-19 17:39:43.934474 | orchestrator | |------+--------+----------|
2025-09-19 17:39:43.934486 | orchestrator | +------+--------+----------+
2025-09-19 17:39:44.221732 | orchestrator | + osism manage compute list testbed-node-5
2025-09-19 17:39:47.574521 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:39:47.574631 | orchestrator | | ID | Name | Status |
2025-09-19 17:39:47.574645 | orchestrator | |--------------------------------------+--------+----------|
2025-09-19 17:39:47.574656 | orchestrator | | 2fddffd0-2cc6-4172-9014-31e19f6b2734 | test | ACTIVE |
2025-09-19 17:39:47.574687 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:39:47.873711 | orchestrator | + server_ping
2025-09-19 17:39:47.874994 | orchestrator | ++ tr -d '\r'
2025-09-19 17:39:47.875274 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-19 17:39:50.766243 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:39:50.766344 | orchestrator | + ping -c3 192.168.112.188
2025-09-19 17:39:50.777089 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2025-09-19 17:39:50.777129 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=8.78 ms
2025-09-19 17:39:51.772144 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.41 ms
2025-09-19 17:39:52.773945 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.92 ms
2025-09-19 17:39:52.774108 | orchestrator |
2025-09-19 17:39:52.774125 | orchestrator | --- 192.168.112.188 ping statistics ---
2025-09-19 17:39:52.774139 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 17:39:52.774151 | orchestrator | rtt min/avg/max/mdev = 1.923/4.371/8.782/3.125 ms
2025-09-19 17:39:52.774202 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:39:52.774216 | orchestrator | + ping -c3 192.168.112.111
2025-09-19 17:39:52.785087 | orchestrator | PING 192.168.112.111 (192.168.112.111) 56(84) bytes of data.
2025-09-19 17:39:52.785138 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=1 ttl=63 time=6.82 ms
2025-09-19 17:39:53.783353 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=2 ttl=63 time=2.60 ms
2025-09-19 17:39:54.785261 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=3 ttl=63 time=2.04 ms
2025-09-19 17:39:54.785356 | orchestrator |
2025-09-19 17:39:54.785371 | orchestrator | --- 192.168.112.111 ping statistics ---
2025-09-19 17:39:54.785384 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-19 17:39:54.785395 | orchestrator | rtt min/avg/max/mdev = 2.040/3.820/6.822/2.134 ms
2025-09-19 17:39:54.785407 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:39:54.785419 | orchestrator | + ping -c3 192.168.112.119
2025-09-19 17:39:54.796101 | orchestrator | PING 192.168.112.119 (192.168.112.119) 56(84) bytes of data.
2025-09-19 17:39:54.796142 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=1 ttl=63 time=6.41 ms
2025-09-19 17:39:55.793874 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=2 ttl=63 time=2.37 ms
2025-09-19 17:39:56.795156 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=3 ttl=63 time=2.26 ms
2025-09-19 17:39:56.795266 | orchestrator |
2025-09-19 17:39:56.795281 | orchestrator | --- 192.168.112.119 ping statistics ---
2025-09-19 17:39:56.795320 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-19 17:39:56.795332 | orchestrator | rtt min/avg/max/mdev = 2.260/3.680/6.410/1.930 ms
2025-09-19 17:39:56.795711 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:39:56.795736 | orchestrator | + ping -c3 192.168.112.192
2025-09-19 17:39:56.806194 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2025-09-19 17:39:56.806262 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=6.39 ms
2025-09-19 17:39:57.804300 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.56 ms
2025-09-19 17:39:58.805832 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=2.08 ms
2025-09-19 17:39:58.806257 | orchestrator |
2025-09-19 17:39:58.806293 | orchestrator | --- 192.168.112.192 ping statistics ---
2025-09-19 17:39:58.806307 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-19 17:39:58.806318 | orchestrator | rtt min/avg/max/mdev = 2.082/3.677/6.386/1.925 ms
2025-09-19 17:39:58.806560 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:39:58.806583 | orchestrator | + ping -c3 192.168.112.126
2025-09-19 17:39:58.817337 | orchestrator | PING 192.168.112.126 (192.168.112.126) 56(84) bytes of data.
2025-09-19 17:39:58.817393 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=1 ttl=63 time=6.07 ms
2025-09-19 17:39:59.815532 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=2 ttl=63 time=2.46 ms
2025-09-19 17:40:00.816399 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=3 ttl=63 time=1.97 ms
2025-09-19 17:40:00.816501 | orchestrator |
2025-09-19 17:40:00.816515 | orchestrator | --- 192.168.112.126 ping statistics ---
2025-09-19 17:40:00.816528 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 17:40:00.816539 | orchestrator | rtt min/avg/max/mdev = 1.973/3.499/6.070/1.828 ms
2025-09-19 17:40:00.816749 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5
2025-09-19 17:40:03.912965 | orchestrator | 2025-09-19 17:40:03 | INFO  | Live migrating server 2fddffd0-2cc6-4172-9014-31e19f6b2734
2025-09-19 17:40:17.260392 | orchestrator | 2025-09-19 17:40:17 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:40:19.634568 | orchestrator | 2025-09-19 17:40:19 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:40:21.977556 | orchestrator | 2025-09-19 17:40:21 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:40:24.346947 | orchestrator | 2025-09-19 17:40:24 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:40:26.713277 | orchestrator | 2025-09-19 17:40:26 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:40:28.967881 | orchestrator | 2025-09-19 17:40:28 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:40:31.243915 | orchestrator | 2025-09-19 17:40:31 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:40:33.562114 | orchestrator | 2025-09-19 17:40:33 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:40:35.840815 | orchestrator | 2025-09-19 17:40:35 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:40:38.143545 | orchestrator | 2025-09-19 17:40:38 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:40:40.512385 | orchestrator | 2025-09-19 17:40:40 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) completed with status ACTIVE
2025-09-19 17:40:40.808521 | orchestrator | + compute_list
2025-09-19 17:40:40.808599 | orchestrator | + osism manage compute list testbed-node-3
2025-09-19 17:40:44.076184 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:40:44.076291 | orchestrator | | ID | Name | Status |
2025-09-19 17:40:44.076305 | orchestrator | |--------------------------------------+--------+----------|
2025-09-19 17:40:44.076317 | orchestrator | | 7fd2ef52-d999-4639-b62b-c9326f729bf2 | test-4 | ACTIVE |
2025-09-19 17:40:44.076329 | orchestrator | | 81208bfb-6a09-40fd-ac3f-a554c1536141 | test-3 | ACTIVE |
2025-09-19 17:40:44.076340 | orchestrator | | b480628c-9e7b-4709-bbfb-a69a6d3ddad3 | test-2 | ACTIVE |
2025-09-19 17:40:44.076350 | orchestrator | | 27757632-bcf8-4008-8bac-5114182ea4a7 | test-1 | ACTIVE |
2025-09-19 17:40:44.076361 | orchestrator | | 2fddffd0-2cc6-4172-9014-31e19f6b2734 | test | ACTIVE |
2025-09-19 17:40:44.076372 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:40:44.356253 | orchestrator | + osism manage compute list testbed-node-4
2025-09-19 17:40:47.118318 | orchestrator | +------+--------+----------+
2025-09-19 17:40:47.118420 | orchestrator | | ID | Name | Status |
2025-09-19 17:40:47.118435 | orchestrator | |------+--------+----------|
2025-09-19 17:40:47.118446 | orchestrator | +------+--------+----------+
2025-09-19 17:40:47.403946 | orchestrator | + osism manage compute list testbed-node-5
2025-09-19 17:40:50.254766 | orchestrator | +------+--------+----------+
2025-09-19 17:40:50.254866 | orchestrator | | ID | Name | Status |
2025-09-19 17:40:50.254879 | orchestrator | |------+--------+----------|
2025-09-19 17:40:50.254890 | orchestrator | +------+--------+----------+
2025-09-19 17:40:50.552300 | orchestrator | + server_ping
2025-09-19 17:40:50.553933 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-19 17:40:50.555353 | orchestrator | ++ tr -d '\r'
2025-09-19 17:40:53.357642 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:40:53.357795 | orchestrator | + ping -c3 192.168.112.188
2025-09-19 17:40:53.366923 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2025-09-19 17:40:53.366981 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=6.52 ms
2025-09-19 17:40:54.364765 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.22 ms
2025-09-19 17:40:55.365840 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.48 ms
2025-09-19 17:40:55.365970 | orchestrator |
2025-09-19 17:40:55.365988 | orchestrator | --- 192.168.112.188 ping statistics ---
2025-09-19 17:40:55.366001 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 17:40:55.366012 | orchestrator | rtt min/avg/max/mdev = 1.477/3.405/6.518/2.221 ms
2025-09-19 17:40:55.366653 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:40:55.366699 | orchestrator | + ping -c3 192.168.112.111
2025-09-19 17:40:55.375963 | orchestrator | PING 192.168.112.111 (192.168.112.111) 56(84) bytes of data.
2025-09-19 17:40:55.375991 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=1 ttl=63 time=5.43 ms
2025-09-19 17:40:56.375089 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=2 ttl=63 time=2.35 ms
2025-09-19 17:40:57.376198 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=3 ttl=63 time=1.71 ms
2025-09-19 17:40:57.376448 | orchestrator |
2025-09-19 17:40:57.376467 | orchestrator | --- 192.168.112.111 ping statistics ---
2025-09-19 17:40:57.376474 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-19 17:40:57.376480 | orchestrator | rtt min/avg/max/mdev = 1.706/3.161/5.428/1.624 ms
2025-09-19 17:40:57.377277 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:40:57.377294 | orchestrator | + ping -c3 192.168.112.119
2025-09-19 17:40:57.389460 | orchestrator | PING 192.168.112.119 (192.168.112.119) 56(84) bytes of data.
2025-09-19 17:40:57.389476 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=1 ttl=63 time=7.40 ms
2025-09-19 17:40:58.385953 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=2 ttl=63 time=2.61 ms
2025-09-19 17:40:59.387774 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=3 ttl=63 time=2.11 ms
2025-09-19 17:40:59.387877 | orchestrator |
2025-09-19 17:40:59.387893 | orchestrator | --- 192.168.112.119 ping statistics ---
2025-09-19 17:40:59.387905 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 17:40:59.387948 | orchestrator | rtt min/avg/max/mdev = 2.107/4.038/7.398/2.384 ms
2025-09-19 17:40:59.387961 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:40:59.387972 | orchestrator | + ping -c3 192.168.112.192
2025-09-19 17:40:59.399969 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2025-09-19 17:40:59.400029 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=7.54 ms
2025-09-19 17:41:00.397132 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.80 ms
2025-09-19 17:41:01.399080 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=2.16 ms
2025-09-19 17:41:01.399180 | orchestrator |
2025-09-19 17:41:01.399197 | orchestrator | --- 192.168.112.192 ping statistics ---
2025-09-19 17:41:01.399209 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-19 17:41:01.399220 | orchestrator | rtt min/avg/max/mdev = 2.159/4.168/7.544/2.401 ms
2025-09-19 17:41:01.399243 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:41:01.399255 | orchestrator | + ping -c3 192.168.112.126
2025-09-19 17:41:01.410935 | orchestrator | PING 192.168.112.126 (192.168.112.126) 56(84) bytes of data.
2025-09-19 17:41:01.410986 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=1 ttl=63 time=6.72 ms
2025-09-19 17:41:02.408828 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=2 ttl=63 time=2.64 ms
2025-09-19 17:41:03.408934 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=3 ttl=63 time=1.73 ms
2025-09-19 17:41:03.409021 | orchestrator |
2025-09-19 17:41:03.409032 | orchestrator | --- 192.168.112.126 ping statistics ---
2025-09-19 17:41:03.409042 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 17:41:03.409050 | orchestrator | rtt min/avg/max/mdev = 1.733/3.697/6.720/2.169 ms
2025-09-19 17:41:03.409162 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2025-09-19 17:41:06.667604 | orchestrator | 2025-09-19 17:41:06 | INFO  | Live migrating server 7fd2ef52-d999-4639-b62b-c9326f729bf2
2025-09-19 17:41:18.892806 | orchestrator | 2025-09-19 17:41:18 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:41:21.278723 | orchestrator | 2025-09-19 17:41:21 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:41:23.629810 | orchestrator | 2025-09-19 17:41:23 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:41:25.927699 | orchestrator | 2025-09-19 17:41:25 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:41:28.334380 | orchestrator | 2025-09-19 17:41:28 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:41:30.586567 | orchestrator | 2025-09-19 17:41:30 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:41:32.878527 | orchestrator | 2025-09-19 17:41:32 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:41:35.244361 | orchestrator | 2025-09-19 17:41:35 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:41:37.517133 | orchestrator | 2025-09-19 17:41:37 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) completed with status ACTIVE
2025-09-19 17:41:37.517235 | orchestrator | 2025-09-19 17:41:37 | INFO  | Live migrating server 81208bfb-6a09-40fd-ac3f-a554c1536141
2025-09-19 17:41:49.822784 | orchestrator | 2025-09-19 17:41:49 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:41:52.174264 | orchestrator | 2025-09-19 17:41:52 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:41:54.521415 | orchestrator | 2025-09-19 17:41:54 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:41:56.856948 | orchestrator | 2025-09-19 17:41:56 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:41:59.143535 | orchestrator | 2025-09-19 17:41:59 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:42:01.458342 | orchestrator | 2025-09-19 17:42:01 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:42:03.742926 | orchestrator | 2025-09-19 17:42:03 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:42:06.100901 | orchestrator | 2025-09-19 17:42:06 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:42:08.470115 | orchestrator | 2025-09-19 17:42:08 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) completed with status ACTIVE
2025-09-19 17:42:08.470211 | orchestrator | 2025-09-19 17:42:08 | INFO  | Live migrating server b480628c-9e7b-4709-bbfb-a69a6d3ddad3
2025-09-19 17:42:19.225845 | orchestrator | 2025-09-19 17:42:19 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:42:21.621846 | orchestrator | 2025-09-19 17:42:21 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:42:23.927689 | orchestrator | 2025-09-19 17:42:23 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:42:26.299675 | orchestrator | 2025-09-19 17:42:26 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:42:28.569794 | orchestrator | 2025-09-19 17:42:28 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:42:30.861815 | orchestrator | 2025-09-19 17:42:30 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:42:33.200352 | orchestrator | 2025-09-19 17:42:33 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:42:35.486916 | orchestrator | 2025-09-19 17:42:35 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:42:37.870186 | orchestrator | 2025-09-19 17:42:37 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) completed with status ACTIVE
2025-09-19 17:42:37.870289 | orchestrator | 2025-09-19 17:42:37 | INFO  | Live migrating server 27757632-bcf8-4008-8bac-5114182ea4a7
2025-09-19 17:42:48.956346 | orchestrator | 2025-09-19 17:42:48 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:42:51.269721 | orchestrator | 2025-09-19 17:42:51 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:42:53.638447 | orchestrator | 2025-09-19 17:42:53 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:42:56.129839 | orchestrator | 2025-09-19 17:42:56 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:42:58.346635 | orchestrator | 2025-09-19 17:42:58 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:43:01.133635 | orchestrator | 2025-09-19 17:43:01 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:43:03.402969 | orchestrator | 2025-09-19 17:43:03 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:43:05.687134 | orchestrator | 2025-09-19 17:43:05 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:43:08.063481 | orchestrator | 2025-09-19 17:43:08 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) completed with status ACTIVE
2025-09-19 17:43:08.063661 | orchestrator | 2025-09-19 17:43:08 | INFO  | Live migrating server 2fddffd0-2cc6-4172-9014-31e19f6b2734
2025-09-19 17:43:20.675801 | orchestrator | 2025-09-19 17:43:20 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:43:23.084430 | orchestrator | 2025-09-19 17:43:23 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:43:25.426975 | orchestrator | 2025-09-19 17:43:25 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:43:27.996386 | orchestrator | 2025-09-19 17:43:27 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:43:30.277557 | orchestrator | 2025-09-19 17:43:30 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:43:32.635260 | orchestrator | 2025-09-19 17:43:32 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:43:34.909112 | orchestrator | 2025-09-19 17:43:34 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:43:37.216640 | orchestrator | 2025-09-19 17:43:37 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:43:39.488241 | orchestrator | 2025-09-19 17:43:39 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:43:41.755389 | orchestrator | 2025-09-19 17:43:41 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:43:44.049119 | orchestrator | 2025-09-19 17:43:44 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) completed with status ACTIVE
2025-09-19 17:43:44.338531 | orchestrator | + compute_list
2025-09-19 17:43:44.338627 | orchestrator | + osism manage compute list testbed-node-3
2025-09-19 17:43:47.133994 | orchestrator | +------+--------+----------+
2025-09-19 17:43:47.134172 | orchestrator | | ID | Name | Status |
2025-09-19 17:43:47.134189 | orchestrator | |------+--------+----------|
2025-09-19 17:43:47.134202 | orchestrator | +------+--------+----------+
2025-09-19 17:43:47.433690 | orchestrator | + osism manage compute list testbed-node-4
2025-09-19 17:43:50.877853 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:43:50.877929 | orchestrator | | ID | Name | Status |
2025-09-19 17:43:50.877935 | orchestrator | |--------------------------------------+--------+----------|
2025-09-19 17:43:50.877939 | orchestrator | | 7fd2ef52-d999-4639-b62b-c9326f729bf2 | test-4 | ACTIVE |
2025-09-19 17:43:50.877957 | orchestrator | | 81208bfb-6a09-40fd-ac3f-a554c1536141 | test-3 | ACTIVE |
2025-09-19 17:43:50.877961 | orchestrator | | b480628c-9e7b-4709-bbfb-a69a6d3ddad3 | test-2 | ACTIVE |
2025-09-19 17:43:50.877965 | orchestrator | | 27757632-bcf8-4008-8bac-5114182ea4a7 | test-1 | ACTIVE |
2025-09-19 17:43:50.877969 | orchestrator | | 2fddffd0-2cc6-4172-9014-31e19f6b2734 | test | ACTIVE |
2025-09-19 17:43:50.877973 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:43:51.172428 | orchestrator | + osism manage compute list testbed-node-5
2025-09-19 17:43:53.940178 | orchestrator | +------+--------+----------+
2025-09-19 17:43:53.940300 | orchestrator | | ID | Name | Status |
2025-09-19 17:43:53.940319 | orchestrator | |------+--------+----------|
2025-09-19 17:43:53.940333 | orchestrator | +------+--------+----------+
2025-09-19 17:43:54.233544 | orchestrator | + server_ping
2025-09-19 17:43:54.236049 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-19 17:43:54.236080 | orchestrator | ++ tr -d '\r'
2025-09-19 17:43:57.374408 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:43:57.374564 | orchestrator | + ping -c3 192.168.112.188
2025-09-19 17:43:57.385633 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2025-09-19 17:43:57.385663 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=7.80 ms
2025-09-19 17:43:58.382965 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.54 ms
2025-09-19 17:43:59.384348 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.85 ms
2025-09-19 17:43:59.384526 | orchestrator |
2025-09-19 17:43:59.384544 | orchestrator | --- 192.168.112.188 ping statistics ---
2025-09-19 17:43:59.384556 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-09-19 17:43:59.384567 | orchestrator | rtt min/avg/max/mdev = 1.852/4.065/7.801/2.656 ms
2025-09-19 17:43:59.384949 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:43:59.385371 | orchestrator | + ping -c3 192.168.112.111
2025-09-19 17:43:59.397064 | orchestrator | PING 192.168.112.111 (192.168.112.111) 56(84) bytes of data.
2025-09-19 17:43:59.397144 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=1 ttl=63 time=7.96 ms
2025-09-19 17:44:00.392842 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=2 ttl=63 time=2.37 ms
2025-09-19 17:44:01.394416 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=3 ttl=63 time=1.54 ms
2025-09-19 17:44:01.394611 | orchestrator |
2025-09-19 17:44:01.394637 | orchestrator | --- 192.168.112.111 ping statistics ---
2025-09-19 17:44:01.394650 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 17:44:01.394661 | orchestrator | rtt min/avg/max/mdev = 1.536/3.953/7.955/2.849 ms
2025-09-19 17:44:01.395060 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:44:01.395085 | orchestrator | + ping -c3 192.168.112.119
2025-09-19 17:44:01.407150 | orchestrator | PING 192.168.112.119 (192.168.112.119) 56(84) bytes of data.
2025-09-19 17:44:01.407211 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=1 ttl=63 time=7.95 ms
2025-09-19 17:44:02.403058 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=2 ttl=63 time=2.90 ms
2025-09-19 17:44:03.403293 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=3 ttl=63 time=2.09 ms
2025-09-19 17:44:03.403421 | orchestrator |
2025-09-19 17:44:03.403442 | orchestrator | --- 192.168.112.119 ping statistics ---
2025-09-19 17:44:03.403481 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-19 17:44:03.403493 | orchestrator | rtt min/avg/max/mdev = 2.086/4.312/7.946/2.591 ms
2025-09-19 17:44:03.403999 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:44:03.404023 | orchestrator | + ping -c3 192.168.112.192
2025-09-19 17:44:03.415291 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2025-09-19 17:44:03.415350 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=7.14 ms
2025-09-19 17:44:04.413411 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.84 ms
2025-09-19 17:44:05.413597 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=1.84 ms
2025-09-19 17:44:05.413706 | orchestrator |
2025-09-19 17:44:05.413721 | orchestrator | --- 192.168.112.192 ping statistics ---
2025-09-19 17:44:05.413732 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 17:44:05.413741 | orchestrator | rtt min/avg/max/mdev = 1.843/3.939/7.135/2.296 ms
2025-09-19 17:44:05.413901 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:44:05.413918 | orchestrator | + ping -c3 192.168.112.126
2025-09-19 17:44:05.425531 | orchestrator | PING 192.168.112.126 (192.168.112.126) 56(84) bytes of data.
2025-09-19 17:44:05.425595 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=1 ttl=63 time=7.40 ms
2025-09-19 17:44:06.422426 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=2 ttl=63 time=2.45 ms
2025-09-19 17:44:07.424526 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=3 ttl=63 time=2.13 ms
2025-09-19 17:44:07.424710 | orchestrator |
2025-09-19 17:44:07.424730 | orchestrator | --- 192.168.112.126 ping statistics ---
2025-09-19 17:44:07.424743 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 17:44:07.424754 | orchestrator | rtt min/avg/max/mdev = 2.130/3.993/7.398/2.411 ms
2025-09-19 17:44:07.424850 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-09-19 17:44:10.852887 | orchestrator | 2025-09-19 17:44:10 | INFO  | Live migrating server 7fd2ef52-d999-4639-b62b-c9326f729bf2
2025-09-19 17:44:22.040690 | orchestrator | 2025-09-19 17:44:22 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:44:24.383590 | orchestrator | 2025-09-19 17:44:24 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:44:26.732795 | orchestrator | 2025-09-19 17:44:26 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:44:29.090786 | orchestrator | 2025-09-19 17:44:29 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:44:31.359260 | orchestrator | 2025-09-19 17:44:31 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:44:33.657489 | orchestrator | 2025-09-19 17:44:33 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:44:36.018081 | orchestrator | 2025-09-19 17:44:36 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:44:38.326731 | orchestrator | 2025-09-19 17:44:38 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) is still in progress
2025-09-19 17:44:40.599344 | orchestrator | 2025-09-19 17:44:40 | INFO  | Live migration of 7fd2ef52-d999-4639-b62b-c9326f729bf2 (test-4) completed with status ACTIVE
2025-09-19 17:44:40.599468 | orchestrator | 2025-09-19 17:44:40 | INFO  | Live migrating server 81208bfb-6a09-40fd-ac3f-a554c1536141
2025-09-19 17:44:50.827557 | orchestrator | 2025-09-19 17:44:50 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:44:53.141564 | orchestrator | 2025-09-19 17:44:53 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:44:55.469089 | orchestrator | 2025-09-19 17:44:55 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:44:57.732018 | orchestrator | 2025-09-19 17:44:57 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:45:00.117694 | orchestrator | 2025-09-19 17:45:00 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:45:02.429534 | orchestrator | 2025-09-19 17:45:02 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:45:04.774691 | orchestrator | 2025-09-19 17:45:04 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:45:07.051598 | orchestrator | 2025-09-19 17:45:07 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) is still in progress
2025-09-19 17:45:09.423851 | orchestrator | 2025-09-19 17:45:09 | INFO  | Live migration of 81208bfb-6a09-40fd-ac3f-a554c1536141 (test-3) completed with status ACTIVE
2025-09-19 17:45:09.423952 | orchestrator | 2025-09-19 17:45:09 | INFO  | Live migrating server b480628c-9e7b-4709-bbfb-a69a6d3ddad3
2025-09-19 17:45:19.268260 | orchestrator | 2025-09-19 17:45:19 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:45:21.629712 | orchestrator | 2025-09-19 17:45:21 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:45:23.964754 | orchestrator | 2025-09-19 17:45:23 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:45:26.257853 | orchestrator | 2025-09-19 17:45:26 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:45:28.670139 | orchestrator | 2025-09-19 17:45:28 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:45:30.973349 | orchestrator | 2025-09-19 17:45:30 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:45:33.260237 | orchestrator | 2025-09-19 17:45:33 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:45:35.572628 | orchestrator | 2025-09-19 17:45:35 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) is still in progress
2025-09-19 17:45:37.862258 | orchestrator | 2025-09-19 17:45:37 | INFO  | Live migration of b480628c-9e7b-4709-bbfb-a69a6d3ddad3 (test-2) completed with status ACTIVE
2025-09-19 17:45:37.862336 | orchestrator | 2025-09-19 17:45:37 | INFO  | Live migrating server 27757632-bcf8-4008-8bac-5114182ea4a7
2025-09-19 17:45:48.090606 | orchestrator | 2025-09-19 17:45:48 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:45:50.392915 | orchestrator | 2025-09-19 17:45:50 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:45:52.891959 | orchestrator | 2025-09-19 17:45:52 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:45:55.245144 | orchestrator | 2025-09-19 17:45:55 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:45:57.538481 | orchestrator | 2025-09-19 17:45:57 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:45:59.787665 | orchestrator | 2025-09-19 17:45:59 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:46:02.103909 | orchestrator | 2025-09-19 17:46:02 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:46:04.482466 | orchestrator | 2025-09-19 17:46:04 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:46:06.792553 | orchestrator | 2025-09-19 17:46:06 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) is still in progress
2025-09-19 17:46:09.149108 | orchestrator | 2025-09-19 17:46:09 | INFO  | Live migration of 27757632-bcf8-4008-8bac-5114182ea4a7 (test-1) completed with status ACTIVE
2025-09-19 17:46:09.149219 | orchestrator | 2025-09-19 17:46:09 | INFO  | Live migrating server 2fddffd0-2cc6-4172-9014-31e19f6b2734
2025-09-19 17:46:19.294799 | orchestrator | 2025-09-19 17:46:19 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:46:21.621288 | orchestrator | 2025-09-19 17:46:21 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:46:23.967532 | orchestrator | 2025-09-19 17:46:23 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:46:26.341978 | orchestrator | 2025-09-19 17:46:26 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:46:28.641466 | orchestrator | 2025-09-19 17:46:28 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:46:31.007401 | orchestrator | 2025-09-19 17:46:31 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:46:33.286394 | orchestrator | 2025-09-19 17:46:33 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:46:35.574713 | orchestrator | 2025-09-19 17:46:35 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:46:37.860030 | orchestrator | 2025-09-19 17:46:37 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) is still in progress
2025-09-19 17:46:40.196863 | orchestrator | 2025-09-19 17:46:40 | INFO  | Live migration of 2fddffd0-2cc6-4172-9014-31e19f6b2734 (test) completed with status ACTIVE
2025-09-19 17:46:40.500172 | orchestrator | + compute_list
2025-09-19 17:46:40.500265 | orchestrator | + osism manage compute list testbed-node-3
2025-09-19 17:46:43.303456 | orchestrator | +------+--------+----------+
2025-09-19 17:46:43.303562 | orchestrator | | ID   | Name   | Status   |
2025-09-19 17:46:43.303576 | orchestrator | |------+--------+----------|
2025-09-19 17:46:43.303588 | orchestrator | +------+--------+----------+
2025-09-19 17:46:43.594013 | orchestrator | + osism manage compute list testbed-node-4
2025-09-19 17:46:46.349181 | orchestrator | +------+--------+----------+
2025-09-19 17:46:46.349292 | orchestrator | | ID   | Name   | Status   |
2025-09-19 17:46:46.349381 | orchestrator | |------+--------+----------|
2025-09-19 17:46:46.349400 | orchestrator | +------+--------+----------+
2025-09-19 17:46:46.653931 | orchestrator | + osism manage compute list testbed-node-5
2025-09-19 17:46:49.812797 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:46:49.812895 | orchestrator | | ID                                   | Name   | Status   |
2025-09-19 17:46:49.812910 | orchestrator | |--------------------------------------+--------+----------|
2025-09-19 17:46:49.812922 | orchestrator | | 7fd2ef52-d999-4639-b62b-c9326f729bf2 | test-4 | ACTIVE   |
2025-09-19 17:46:49.812933 | orchestrator | | 81208bfb-6a09-40fd-ac3f-a554c1536141 | test-3 | ACTIVE   |
2025-09-19 17:46:49.812944 | orchestrator | | b480628c-9e7b-4709-bbfb-a69a6d3ddad3 | test-2 | ACTIVE   |
2025-09-19 17:46:49.812955 | orchestrator | | 27757632-bcf8-4008-8bac-5114182ea4a7 | test-1 | ACTIVE   |
2025-09-19 17:46:49.812966 | orchestrator | | 2fddffd0-2cc6-4172-9014-31e19f6b2734 | test   | ACTIVE   |
2025-09-19 17:46:49.812977 | orchestrator | +--------------------------------------+--------+----------+
2025-09-19 17:46:50.094650 | orchestrator | + server_ping
2025-09-19 17:46:50.096453 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-09-19 17:46:50.096499 | orchestrator | ++ tr -d '\r'
2025-09-19 17:46:52.979740 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:46:52.979847 | orchestrator | + ping -c3 192.168.112.188
2025-09-19 17:46:52.991690 | orchestrator | PING 192.168.112.188 (192.168.112.188) 56(84) bytes of data.
2025-09-19 17:46:52.991767 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=1 ttl=63 time=9.37 ms
2025-09-19 17:46:53.987005 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=2 ttl=63 time=2.81 ms
2025-09-19 17:46:54.987488 | orchestrator | 64 bytes from 192.168.112.188: icmp_seq=3 ttl=63 time=1.92 ms
2025-09-19 17:46:54.987588 | orchestrator |
2025-09-19 17:46:54.987604 | orchestrator | --- 192.168.112.188 ping statistics ---
2025-09-19 17:46:54.987616 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 17:46:54.987645 | orchestrator | rtt min/avg/max/mdev = 1.918/4.697/9.366/3.320 ms
2025-09-19 17:46:54.987657 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:46:54.987669 | orchestrator | + ping -c3 192.168.112.111
2025-09-19 17:46:55.000353 | orchestrator | PING 192.168.112.111 (192.168.112.111) 56(84) bytes of data.
2025-09-19 17:46:55.000439 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=1 ttl=63 time=8.24 ms
2025-09-19 17:46:55.996779 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=2 ttl=63 time=2.68 ms
2025-09-19 17:46:56.997916 | orchestrator | 64 bytes from 192.168.112.111: icmp_seq=3 ttl=63 time=1.80 ms
2025-09-19 17:46:56.998013 | orchestrator |
2025-09-19 17:46:56.998076 | orchestrator | --- 192.168.112.111 ping statistics ---
2025-09-19 17:46:56.998087 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 17:46:56.998098 | orchestrator | rtt min/avg/max/mdev = 1.796/4.236/8.235/2.850 ms
2025-09-19 17:46:56.998108 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:46:56.998119 | orchestrator | + ping -c3 192.168.112.119
2025-09-19 17:46:57.007574 | orchestrator | PING 192.168.112.119 (192.168.112.119) 56(84) bytes of data.
2025-09-19 17:46:57.007600 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=1 ttl=63 time=4.92 ms
2025-09-19 17:46:58.006661 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=2 ttl=63 time=2.10 ms
2025-09-19 17:46:59.008180 | orchestrator | 64 bytes from 192.168.112.119: icmp_seq=3 ttl=63 time=2.12 ms
2025-09-19 17:46:59.008285 | orchestrator |
2025-09-19 17:46:59.008361 | orchestrator | --- 192.168.112.119 ping statistics ---
2025-09-19 17:46:59.008375 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 17:46:59.008387 | orchestrator | rtt min/avg/max/mdev = 2.098/3.043/4.915/1.323 ms
2025-09-19 17:46:59.008790 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:46:59.008816 | orchestrator | + ping -c3 192.168.112.192
2025-09-19 17:46:59.021231 | orchestrator | PING 192.168.112.192 (192.168.112.192) 56(84) bytes of data.
2025-09-19 17:46:59.021328 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=1 ttl=63 time=7.55 ms
2025-09-19 17:47:00.018423 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=2 ttl=63 time=2.10 ms
2025-09-19 17:47:01.019217 | orchestrator | 64 bytes from 192.168.112.192: icmp_seq=3 ttl=63 time=2.05 ms
2025-09-19 17:47:01.019377 | orchestrator |
2025-09-19 17:47:01.019394 | orchestrator | --- 192.168.112.192 ping statistics ---
2025-09-19 17:47:01.019406 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-09-19 17:47:01.019417 | orchestrator | rtt min/avg/max/mdev = 2.051/3.899/7.551/2.582 ms
2025-09-19 17:47:01.019779 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-09-19 17:47:01.019802 | orchestrator | + ping -c3 192.168.112.126
2025-09-19 17:47:01.032155 | orchestrator | PING 192.168.112.126 (192.168.112.126) 56(84) bytes of data.
2025-09-19 17:47:01.032201 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=1 ttl=63 time=8.10 ms
2025-09-19 17:47:02.027645 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=2 ttl=63 time=1.99 ms
2025-09-19 17:47:03.028238 | orchestrator | 64 bytes from 192.168.112.126: icmp_seq=3 ttl=63 time=1.67 ms
2025-09-19 17:47:03.029285 | orchestrator |
2025-09-19 17:47:03.029417 | orchestrator | --- 192.168.112.126 ping statistics ---
2025-09-19 17:47:03.029435 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-09-19 17:47:03.029447 | orchestrator | rtt min/avg/max/mdev = 1.669/3.922/8.103/2.959 ms
2025-09-19 17:47:03.533562 | orchestrator | ok: Runtime: 0:22:18.838483
2025-09-19 17:47:03.589642 |
2025-09-19 17:47:03.589808 | TASK [Run tempest]
2025-09-19 17:47:04.123536 | orchestrator | skipping: Conditional result was False
2025-09-19 17:47:04.141644 |
2025-09-19 17:47:04.141833 | TASK [Check prometheus alert status]
2025-09-19 17:47:04.676738 | orchestrator | skipping: Conditional result was False
2025-09-19 17:47:04.680449 |
2025-09-19 17:47:04.680632 | PLAY RECAP
2025-09-19 17:47:04.680799 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-09-19 17:47:04.680871 |
2025-09-19 17:47:04.898988 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-19 17:47:04.901385 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-19 17:47:05.628775 |
2025-09-19 17:47:05.628933 | PLAY [Post output play]
2025-09-19 17:47:05.645606 |
2025-09-19 17:47:05.645747 | LOOP [stage-output : Register sources]
2025-09-19 17:47:05.708170 |
2025-09-19 17:47:05.708372 | TASK [stage-output : Check sudo]
2025-09-19 17:47:06.544566 | orchestrator | sudo: a password is required
2025-09-19 17:47:06.745854 | orchestrator | ok: Runtime: 0:00:00.015948
2025-09-19 17:47:06.761263 |
2025-09-19 17:47:06.761475 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-19 17:47:06.801695 |
2025-09-19 17:47:06.802019 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-19 17:47:06.869205 | orchestrator | ok
2025-09-19 17:47:06.878026 |
2025-09-19 17:47:06.878150 | LOOP [stage-output : Ensure target folders exist]
2025-09-19 17:47:07.303138 | orchestrator | ok: "docs"
2025-09-19 17:47:07.303574 |
2025-09-19 17:47:07.531252 | orchestrator | ok: "artifacts"
2025-09-19 17:47:07.759320 | orchestrator | ok: "logs"
2025-09-19 17:47:07.779789 |
2025-09-19 17:47:07.779979 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-19 17:47:07.818644 |
2025-09-19 17:47:07.819018 | TASK [stage-output : Make all log files readable]
2025-09-19 17:47:08.092832 | orchestrator | ok
2025-09-19 17:47:08.102102 |
2025-09-19 17:47:08.102243 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-19 17:47:08.137190 | orchestrator | skipping: Conditional result was False
2025-09-19 17:47:08.151395 |
2025-09-19 17:47:08.151537 | TASK [stage-output : Discover log files for compression]
2025-09-19 17:47:08.176304 | orchestrator | skipping: Conditional result was False
2025-09-19 17:47:08.190606 |
2025-09-19 17:47:08.190828 | LOOP [stage-output : Archive everything from logs]
2025-09-19 17:47:08.238505 |
2025-09-19 17:47:08.238694 | PLAY [Post cleanup play]
2025-09-19 17:47:08.247416 |
2025-09-19 17:47:08.247526 | TASK [Set cloud fact (Zuul deployment)]
2025-09-19 17:47:08.298176 | orchestrator | ok
2025-09-19 17:47:08.306998 |
2025-09-19 17:47:08.307109 | TASK [Set cloud fact (local deployment)]
2025-09-19 17:47:08.330973 | orchestrator | skipping: Conditional result was False
2025-09-19 17:47:08.342208 |
2025-09-19 17:47:08.342339 | TASK [Clean the cloud environment]
2025-09-19 17:47:08.903919 | orchestrator | 2025-09-19 17:47:08 - clean up servers
2025-09-19 17:47:09.939137 | orchestrator | 2025-09-19 17:47:09 - testbed-manager
2025-09-19 17:47:10.027470 | orchestrator | 2025-09-19 17:47:10 - testbed-node-5
2025-09-19 17:47:10.128757 | orchestrator | 2025-09-19 17:47:10 - testbed-node-4
2025-09-19 17:47:10.217700 | orchestrator | 2025-09-19 17:47:10 - testbed-node-1
2025-09-19 17:47:10.312081 | orchestrator | 2025-09-19 17:47:10 - testbed-node-0
2025-09-19 17:47:10.402438 | orchestrator | 2025-09-19 17:47:10 - testbed-node-3
2025-09-19 17:47:10.489523 | orchestrator | 2025-09-19 17:47:10 - testbed-node-2
2025-09-19 17:47:10.584517 | orchestrator | 2025-09-19 17:47:10 - clean up keypairs
2025-09-19 17:47:10.601270 | orchestrator | 2025-09-19 17:47:10 - testbed
2025-09-19 17:47:10.624889 | orchestrator | 2025-09-19 17:47:10 - wait for servers to be gone
2025-09-19 17:47:19.548545 | orchestrator | 2025-09-19 17:47:19 - clean up ports
2025-09-19 17:47:19.746712 | orchestrator | 2025-09-19 17:47:19 - 1b7d3369-41fa-47a2-b050-f6d54a973478
2025-09-19 17:47:20.012561 | orchestrator | 2025-09-19 17:47:20 - 20c9a96c-317b-473b-9919-afbf5fee4aa2
2025-09-19 17:47:20.306130 | orchestrator | 2025-09-19 17:47:20 - 41d0b8b8-b359-4cce-bf71-5a9728a87d4e
2025-09-19 17:47:20.510428 | orchestrator | 2025-09-19 17:47:20 - 8750fb2d-e3a5-443d-b25d-7170c2c94e26
2025-09-19 17:47:21.228713 | orchestrator | 2025-09-19 17:47:21 - b0a9a23b-e12d-4d39-863b-60351d7b9f62
2025-09-19 17:47:21.811881 | orchestrator | 2025-09-19 17:47:21 - e471a9f6-4016-450d-a81b-c5dcd8afbb8e
2025-09-19 17:47:22.049536 | orchestrator | 2025-09-19 17:47:22 - ffcdef78-a146-459f-a7c1-6ffd06ca9357
2025-09-19 17:47:22.269877 | orchestrator | 2025-09-19 17:47:22 - clean up volumes
2025-09-19 17:47:22.386516 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-4-node-base
2025-09-19 17:47:22.425045 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-1-node-base
2025-09-19 17:47:22.463360 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-2-node-base
2025-09-19 17:47:22.522255 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-manager-base
2025-09-19 17:47:22.562433 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-3-node-base
2025-09-19 17:47:22.604319 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-0-node-base
2025-09-19 17:47:22.647955 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-5-node-base
2025-09-19 17:47:22.696049 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-2-node-5
2025-09-19 17:47:22.737312 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-3-node-3
2025-09-19 17:47:22.782175 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-8-node-5
2025-09-19 17:47:22.825856 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-0-node-3
2025-09-19 17:47:22.864345 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-1-node-4
2025-09-19 17:47:22.902305 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-7-node-4
2025-09-19 17:47:22.951952 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-4-node-4
2025-09-19 17:47:22.993119 | orchestrator | 2025-09-19 17:47:22 - testbed-volume-5-node-5
2025-09-19 17:47:23.040149 | orchestrator | 2025-09-19 17:47:23 - testbed-volume-6-node-3
2025-09-19 17:47:23.083874 | orchestrator | 2025-09-19 17:47:23 - disconnect routers
2025-09-19 17:47:23.243574 | orchestrator | 2025-09-19 17:47:23 - testbed
2025-09-19 17:47:24.777709 | orchestrator | 2025-09-19 17:47:24 - clean up subnets
2025-09-19 17:47:24.832003 | orchestrator | 2025-09-19 17:47:24 - subnet-testbed-management
2025-09-19 17:47:24.990323 | orchestrator | 2025-09-19 17:47:24 - clean up networks
2025-09-19 17:47:25.183900 | orchestrator | 2025-09-19 17:47:25 - net-testbed-management
2025-09-19 17:47:25.494458 | orchestrator | 2025-09-19 17:47:25 - clean up security groups
2025-09-19 17:47:25.539806 | orchestrator | 2025-09-19 17:47:25 - testbed-node
2025-09-19 17:47:25.648072 | orchestrator | 2025-09-19 17:47:25 - testbed-management
2025-09-19 17:47:25.774220 | orchestrator | 2025-09-19 17:47:25 - clean up floating ips
2025-09-19 17:47:25.812985 | orchestrator | 2025-09-19 17:47:25 - 81.163.193.107
2025-09-19 17:47:26.188773 | orchestrator | 2025-09-19 17:47:26 - clean up routers
2025-09-19 17:47:26.303169 | orchestrator | 2025-09-19 17:47:26 - testbed
2025-09-19 17:47:27.393462 | orchestrator | ok: Runtime: 0:00:18.573077
2025-09-19 17:47:27.397582 |
2025-09-19 17:47:27.397766 | PLAY RECAP
2025-09-19 17:47:27.397900 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-19 17:47:27.397964 |
2025-09-19 17:47:27.529796 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-19 17:47:27.532197 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-19 17:47:28.284003 |
2025-09-19 17:47:28.284169 | PLAY [Cleanup play]
2025-09-19 17:47:28.299875 |
2025-09-19 17:47:28.299994 | TASK [Set cloud fact (Zuul deployment)]
2025-09-19 17:47:28.366395 | orchestrator | ok
2025-09-19 17:47:28.375101 |
2025-09-19 17:47:28.375238 | TASK [Set cloud fact (local deployment)]
2025-09-19 17:47:28.409141 | orchestrator | skipping: Conditional result was False
2025-09-19 17:47:28.424772 |
2025-09-19 17:47:28.424907 | TASK [Clean the cloud environment]
2025-09-19 17:47:29.537801 | orchestrator | 2025-09-19 17:47:29 - clean up servers
2025-09-19 17:47:29.999418 | orchestrator | 2025-09-19 17:47:29 - clean up keypairs
2025-09-19 17:47:30.015253 | orchestrator | 2025-09-19 17:47:30 - wait for servers to be gone
2025-09-19 17:47:30.055189 | orchestrator | 2025-09-19 17:47:30 - clean up ports
2025-09-19 17:47:30.126449 | orchestrator | 2025-09-19 17:47:30 - clean up volumes
2025-09-19 17:47:30.185975 | orchestrator | 2025-09-19 17:47:30 - disconnect routers
2025-09-19 17:47:30.208364 | orchestrator | 2025-09-19 17:47:30 - clean up subnets
2025-09-19 17:47:30.229371 | orchestrator | 2025-09-19 17:47:30 - clean up networks
2025-09-19 17:47:30.426218 | orchestrator | 2025-09-19 17:47:30 - clean up security groups
2025-09-19 17:47:30.463936 | orchestrator | 2025-09-19 17:47:30 - clean up floating ips
2025-09-19 17:47:30.493728 | orchestrator | 2025-09-19 17:47:30 - clean up routers
2025-09-19 17:47:30.963957 | orchestrator | ok: Runtime: 0:00:01.347198
2025-09-19 17:47:30.967697 |
2025-09-19 17:47:30.967851 | PLAY RECAP
2025-09-19 17:47:30.967970 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-19 17:47:30.968033 |
2025-09-19 17:47:31.093829 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-19 17:47:31.096616 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-19 17:47:31.832012 |
2025-09-19 17:47:31.832176 | PLAY [Base post-fetch]
2025-09-19 17:47:31.847928 |
2025-09-19 17:47:31.848056 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-19 17:47:31.903737 | orchestrator | skipping: Conditional result was False
2025-09-19 17:47:31.919291 |
2025-09-19 17:47:31.919491 | TASK [fetch-output : Set log path for single node]
2025-09-19 17:47:31.957478 | orchestrator | ok
2025-09-19 17:47:31.966290 |
2025-09-19 17:47:31.966415 | LOOP [fetch-output : Ensure local output dirs]
2025-09-19 17:47:32.430686 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/cd2a4281f1324a0188bc914860afae05/work/logs"
2025-09-19 17:47:32.706912 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/cd2a4281f1324a0188bc914860afae05/work/artifacts"
2025-09-19 17:47:32.965596 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/cd2a4281f1324a0188bc914860afae05/work/docs"
2025-09-19 17:47:32.996551 |
2025-09-19 17:47:32.996798 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-19 17:47:33.886249 | orchestrator | changed: .d..t...... ./
2025-09-19 17:47:33.886606 | orchestrator | changed: All items complete
2025-09-19 17:47:33.886693 |
2025-09-19 17:47:34.622086 | orchestrator | changed: .d..t...... ./
2025-09-19 17:47:35.325048 | orchestrator | changed: .d..t...... ./
2025-09-19 17:47:35.348632 |
2025-09-19 17:47:35.348773 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-19 17:47:35.845171 | orchestrator -> localhost | ok: Item: artifacts Runtime: 0:00:00.007741
2025-09-19 17:47:36.123817 | orchestrator -> localhost | ok: Item: docs Runtime: 0:00:00.010162
2025-09-19 17:47:36.148147 |
2025-09-19 17:47:36.148267 | PLAY RECAP
2025-09-19 17:47:36.148336 | orchestrator | ok: 4 changed: 3 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-19 17:47:36.148373 |
2025-09-19 17:47:36.273328 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-19 17:47:36.275393 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-19 17:47:36.991882 |
2025-09-19 17:47:36.992055 | PLAY [Base post]
2025-09-19 17:47:37.007404 |
2025-09-19 17:47:37.007614 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-19 17:47:37.932408 | orchestrator | changed
2025-09-19 17:47:37.942718 |
2025-09-19 17:47:37.942915 | PLAY RECAP
2025-09-19 17:47:37.943037 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-19 17:47:37.943145 |
2025-09-19 17:47:38.056912 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-19 17:47:38.057883 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-19 17:47:38.824501 |
2025-09-19 17:47:38.824690 | PLAY [Base post-logs]
2025-09-19 17:47:38.835200 |
2025-09-19 17:47:38.835339 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-19 17:47:39.296548 | localhost | changed
2025-09-19 17:47:39.306518 |
2025-09-19 17:47:39.306652 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-19 17:47:39.343511 | localhost | ok
2025-09-19 17:47:39.348752 |
2025-09-19 17:47:39.348901 | TASK [Set zuul-log-path fact]
2025-09-19 17:47:39.377441 | localhost | ok
2025-09-19 17:47:39.390736 |
2025-09-19 17:47:39.390920 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-19 17:47:39.428400 | localhost | ok
2025-09-19 17:47:39.434772 |
2025-09-19 17:47:39.434973 | TASK [upload-logs : Create log directories]
2025-09-19 17:47:39.940849 | localhost | changed
2025-09-19 17:47:39.945279 |
2025-09-19 17:47:39.945426 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-19 17:47:40.437216 | localhost -> localhost | ok: Runtime: 0:00:00.006905
2025-09-19 17:47:40.446267 |
2025-09-19 17:47:40.446452 | TASK [upload-logs : Upload logs to log server]
2025-09-19 17:47:40.981539 | localhost | Output suppressed because no_log was given
2025-09-19 17:47:40.985932 |
2025-09-19 17:47:40.986147 | LOOP [upload-logs : Compress console log and json output]
2025-09-19 17:47:41.044054 | localhost | skipping: Conditional result was False
2025-09-19 17:47:41.049364 | localhost | skipping: Conditional result was False
2025-09-19 17:47:41.060861 |
2025-09-19 17:47:41.061075 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-19 17:47:41.110198 | localhost | skipping: Conditional result was False
2025-09-19 17:47:41.110903 |
2025-09-19 17:47:41.114031 | localhost | skipping: Conditional result was False
2025-09-19 17:47:41.123575 |
2025-09-19 17:47:41.123827 | LOOP [upload-logs : Upload console log and json output]